Feb 17 00:22:52 crc systemd[1]: Starting Kubernetes Kubelet... Feb 17 00:22:52 crc restorecon[4738]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 17 00:22:52 
crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 17 00:22:52 crc restorecon[4738]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 
00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 00:22:52 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc 
restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 
crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 
crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 
00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 17 00:22:53 crc 
restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 
00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 
00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc 
restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:53 crc restorecon[4738]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 17 00:22:53 crc restorecon[4738]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 17 00:22:54 crc kubenswrapper[4805]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 00:22:54 crc kubenswrapper[4805]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 17 00:22:54 crc kubenswrapper[4805]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 00:22:54 crc kubenswrapper[4805]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 17 00:22:54 crc kubenswrapper[4805]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 17 00:22:54 crc kubenswrapper[4805]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.495156 4805 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503407 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503440 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503450 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503461 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503470 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503479 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503489 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503498 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503507 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503515 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503526 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503548 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503556 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503565 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503573 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503581 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503589 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503599 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503609 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503619 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503629 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503637 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503645 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503653 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503661 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503669 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503676 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503684 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503691 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503699 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503707 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503715 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503722 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503736 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503743 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503751 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503759 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503766 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503776 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503786 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503797 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503805 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503814 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503822 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503830 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503854 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503880 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503889 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503897 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503905 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503912 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503920 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503928 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503936 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503943 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503950 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503959 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503966 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503974 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503983 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.503992 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.504002 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.504010 4805 feature_gate.go:330] unrecognized feature gate: Example Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.504018 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.504025 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.504033 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.504040 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.504048 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.504100 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.504111 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.504123 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504246 4805 flags.go:64] FLAG: --address="0.0.0.0" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504262 4805 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504277 4805 flags.go:64] FLAG: --anonymous-auth="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504287 4805 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504299 4805 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504309 4805 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504320 4805 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504366 4805 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504376 4805 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504385 4805 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504395 4805 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504405 4805 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504416 4805 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504425 4805 flags.go:64] FLAG: --cgroup-root="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504434 4805 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504443 4805 flags.go:64] FLAG: --client-ca-file="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504451 4805 flags.go:64] FLAG: 
--cloud-config="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504460 4805 flags.go:64] FLAG: --cloud-provider="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504468 4805 flags.go:64] FLAG: --cluster-dns="[]" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504479 4805 flags.go:64] FLAG: --cluster-domain="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504487 4805 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504497 4805 flags.go:64] FLAG: --config-dir="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504505 4805 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504515 4805 flags.go:64] FLAG: --container-log-max-files="5" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504526 4805 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504535 4805 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504544 4805 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504553 4805 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504562 4805 flags.go:64] FLAG: --contention-profiling="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504572 4805 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504580 4805 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504591 4805 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504600 4805 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504610 4805 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504619 4805 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504628 4805 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504636 4805 flags.go:64] FLAG: --enable-load-reader="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504646 4805 flags.go:64] FLAG: --enable-server="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504655 4805 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504666 4805 flags.go:64] FLAG: --event-burst="100" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504675 4805 flags.go:64] FLAG: --event-qps="50" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504710 4805 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504720 4805 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504729 4805 flags.go:64] FLAG: --eviction-hard="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504739 4805 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504748 4805 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504757 4805 flags.go:64] FLAG: 
--eviction-pressure-transition-period="5m0s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504766 4805 flags.go:64] FLAG: --eviction-soft="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504777 4805 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504786 4805 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504795 4805 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504804 4805 flags.go:64] FLAG: --experimental-mounter-path="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504813 4805 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504822 4805 flags.go:64] FLAG: --fail-swap-on="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504831 4805 flags.go:64] FLAG: --feature-gates="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504842 4805 flags.go:64] FLAG: --file-check-frequency="20s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504851 4805 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504860 4805 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504870 4805 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504879 4805 flags.go:64] FLAG: --healthz-port="10248" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504888 4805 flags.go:64] FLAG: --help="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504898 4805 flags.go:64] FLAG: --hostname-override="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504907 4805 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504917 4805 flags.go:64] FLAG: --http-check-frequency="20s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504926 4805 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504935 4805 flags.go:64] FLAG: --image-credential-provider-config="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504944 4805 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504952 4805 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504962 4805 flags.go:64] FLAG: --image-service-endpoint="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504970 4805 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504979 4805 flags.go:64] FLAG: --kube-api-burst="100" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504989 4805 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.504998 4805 flags.go:64] FLAG: --kube-api-qps="50" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505006 4805 flags.go:64] FLAG: --kube-reserved="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505016 4805 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505024 4805 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505033 4805 flags.go:64] FLAG: 
--kubelet-cgroups="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505042 4805 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505051 4805 flags.go:64] FLAG: --lock-file="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505059 4805 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505069 4805 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505078 4805 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505090 4805 flags.go:64] FLAG: --log-json-split-stream="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505099 4805 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505109 4805 flags.go:64] FLAG: --log-text-split-stream="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505118 4805 flags.go:64] FLAG: --logging-format="text" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505127 4805 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505136 4805 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505144 4805 flags.go:64] FLAG: --manifest-url="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505153 4805 flags.go:64] FLAG: --manifest-url-header="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505164 4805 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505173 4805 flags.go:64] FLAG: --max-open-files="1000000" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505185 4805 flags.go:64] FLAG: --max-pods="110" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505194 4805 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505202 4805 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505212 4805 flags.go:64] FLAG: --memory-manager-policy="None" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505221 4805 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505231 4805 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505240 4805 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505249 4805 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505267 4805 flags.go:64] FLAG: --node-status-max-images="50" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505276 4805 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505285 4805 flags.go:64] FLAG: --oom-score-adj="-999" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505295 4805 flags.go:64] FLAG: --pod-cidr="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505303 4805 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 17 00:22:54 crc 
kubenswrapper[4805]: I0217 00:22:54.505316 4805 flags.go:64] FLAG: --pod-manifest-path="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505358 4805 flags.go:64] FLAG: --pod-max-pids="-1" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505368 4805 flags.go:64] FLAG: --pods-per-core="0" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505376 4805 flags.go:64] FLAG: --port="10250" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505386 4805 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505395 4805 flags.go:64] FLAG: --provider-id="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505404 4805 flags.go:64] FLAG: --qos-reserved="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505412 4805 flags.go:64] FLAG: --read-only-port="10255" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505422 4805 flags.go:64] FLAG: --register-node="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505430 4805 flags.go:64] FLAG: --register-schedulable="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505439 4805 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505454 4805 flags.go:64] FLAG: --registry-burst="10" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505462 4805 flags.go:64] FLAG: --registry-qps="5" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505471 4805 flags.go:64] FLAG: --reserved-cpus="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505480 4805 flags.go:64] FLAG: --reserved-memory="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505492 4805 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505501 4805 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505510 4805 flags.go:64] FLAG: --rotate-certificates="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505518 4805 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505528 4805 flags.go:64] FLAG: --runonce="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505537 4805 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505546 4805 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505555 4805 flags.go:64] FLAG: --seccomp-default="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505564 4805 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505573 4805 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505583 4805 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505592 4805 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505601 4805 flags.go:64] FLAG: --storage-driver-password="root" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505609 4805 flags.go:64] FLAG: --storage-driver-secure="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505618 4805 flags.go:64] FLAG: --storage-driver-table="stats" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505627 4805 flags.go:64] FLAG: 
--storage-driver-user="root" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505636 4805 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505645 4805 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505654 4805 flags.go:64] FLAG: --system-cgroups="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505663 4805 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505677 4805 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505685 4805 flags.go:64] FLAG: --tls-cert-file="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505694 4805 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505704 4805 flags.go:64] FLAG: --tls-min-version="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505712 4805 flags.go:64] FLAG: --tls-private-key-file="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505721 4805 flags.go:64] FLAG: --topology-manager-policy="none" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505730 4805 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505739 4805 flags.go:64] FLAG: --topology-manager-scope="container" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505748 4805 flags.go:64] FLAG: --v="2" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505760 4805 flags.go:64] FLAG: --version="false" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505771 4805 flags.go:64] FLAG: --vmodule="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505782 4805 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.505792 4805 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.505985 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.505994 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506003 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506012 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506022 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506031 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506039 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506055 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
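The flags.go:64 FLAG: --name="value" entries above record the effective value of every kubelet command-line flag at startup (the dump itself shows --v="2"). Parsed into a dictionary, the dump is convenient for diffing the flag set between two nodes or two boots. This is a sketch under the same assumption as before: journal output saved to text files, with both filenames made up.

import re

# Matches the startup dump entries, e.g.  FLAG: --max-pods="110"
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

def flag_dump(path):
    """Return {flag: value} parsed from a kubelet journal export."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        return dict(FLAG_RE.findall(fh.read()))

if __name__ == "__main__":
    a = flag_dump("kubelet-node-a.log")  # illustrative filenames
    b = flag_dump("kubelet-node-b.log")
    for flag in sorted(set(a) | set(b)):
        if a.get(flag) != b.get(flag):
            print(f"{flag}: {a.get(flag)!r} -> {b.get(flag)!r}")

On this node the dump also shows which values are still supplied as flags rather than via the config file, for example --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" and --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi", matching the deprecation warnings logged earlier.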
Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506065 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506075 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506084 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506092 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506100 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506107 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506116 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506123 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506130 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506138 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506145 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506154 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506162 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506172 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506181 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506190 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506200 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506210 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506220 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506230 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506238 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506247 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506256 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506264 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506272 4805 feature_gate.go:330] unrecognized feature gate: Example Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506279 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506287 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506295 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506302 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506310 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506317 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506354 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506363 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506370 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506378 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506385 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506394 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506401 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506409 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506416 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506424 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506433 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506440 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506448 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506456 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 
00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506464 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506471 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506479 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506487 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506494 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506501 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506509 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506516 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506524 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506532 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506539 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506547 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506555 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506562 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506570 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506577 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506585 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.506593 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.506609 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.520780 4805 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.520831 4805 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.520960 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.520979 4805 feature_gate.go:330] unrecognized 
feature gate: AWSClusterHostedDNS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.520988 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.520997 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521007 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521015 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521023 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521031 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521038 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521046 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521058 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521070 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521084 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521100 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521112 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521122 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521132 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521141 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521150 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521160 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521169 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521180 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521190 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521198 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521205 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521214 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521222 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy 
Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521231 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521238 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521246 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521256 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521263 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521271 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521279 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521287 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521295 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521354 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521363 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521371 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521379 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521387 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521398 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521408 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521417 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521425 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521434 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521442 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521450 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521457 4805 feature_gate.go:330] unrecognized feature gate: Example Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521465 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521473 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521480 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521488 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521496 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521503 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521514 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521524 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521533 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521543 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521551 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521559 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521568 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521576 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521584 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521592 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521601 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521608 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521617 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521625 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521633 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521641 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.521654 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521885 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521900 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521909 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521918 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521927 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521935 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521943 4805 
feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521951 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521959 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521967 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521975 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521982 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521990 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.521997 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522006 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522015 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522022 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522029 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522037 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522044 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522052 4805 feature_gate.go:330] unrecognized feature gate: Example Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522060 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522067 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522075 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522085 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522094 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522101 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522111 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522121 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522130 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522140 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522149 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522158 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522166 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522174 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522182 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522191 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522198 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522206 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522213 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522222 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522231 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522241 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522251 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522260 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522268 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522278 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522289 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522298 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522306 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522316 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
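The same set of feature_gate.go:330 "unrecognized feature gate" warnings is emitted four times during this startup (at roughly 00:22:54.503, .506, .521 and .522), and the resolved map that survives each pass is summarized by the feature_gate.go:386 lines, which contain only gates the embedded Kubernetes registry knows. The unrecognized names (GatewayAPI, AdminNetworkPolicy, InsightsConfig and so on) appear to be OpenShift cluster-level gates rather than Kubernetes ones, which is why they produce warnings instead of errors. To read the distinct set once instead of four times, the warnings can be deduplicated with the same kind of scan as above (filename again illustrative):

import re

# Matches e.g.  feature_gate.go:330] unrecognized feature gate: GatewayAPI
GATE_RE = re.compile(r"unrecognized feature gate: (\w+)")

def unknown_gates(path):
    """Distinct feature-gate names the kubelet warned about, alphabetically."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        return sorted(set(GATE_RE.findall(fh.read())))

if __name__ == "__main__":
    for name in unknown_gates("kubelet.log"):  # illustrative filename
        print(name)

Against this excerpt it collapses a few hundred warning entries into a few dozen distinct gate names.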
Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522354 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522363 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522371 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522380 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522388 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522396 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522406 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522414 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522423 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522430 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522438 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522445 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522453 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522460 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522468 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522475 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522483 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522492 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522500 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.522507 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.522519 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.524159 4805 server.go:940] "Client rotation is on, will bootstrap in background" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.534558 4805 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap 
necessary" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.534688 4805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.536799 4805 server.go:997] "Starting client certificate rotation" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.536839 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.537109 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-12 20:24:05.428807093 +0000 UTC Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.537233 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.563215 4805 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.566397 4805 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.576075 4805 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.591270 4805 log.go:25] "Validated CRI v1 runtime API" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.629471 4805 log.go:25] "Validated CRI v1 image API" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.632243 4805 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.638555 4805 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-17-00-18-15-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.638606 4805 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.666105 4805 manager.go:217] Machine: {Timestamp:2026-02-17 00:22:54.663305992 +0000 UTC m=+0.679115430 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c46f5e1f-50b9-4331-9140-c12e3ad03920 
BootID:0a3d29de-c011-49cf-a4c7-02d3c97ac2d5 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1b:62:63 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1b:62:63 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d9:2e:45 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:22:62:60 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:12:b9:a0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e7:10:5b Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:80:53:7e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d6:2e:84:9a:45:8f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ca:ee:12:93:67:a7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: 
DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.666521 4805 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.666710 4805 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.669422 4805 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.669894 4805 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.669973 4805 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.670505 4805 topology_manager.go:138] "Creating topology manager with none policy" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.670534 4805 container_manager_linux.go:303] "Creating device plugin manager" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.671091 4805 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.671168 4805 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.671582 4805 state_mem.go:36] "Initialized new in-memory state store" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.671914 4805 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.678754 4805 kubelet.go:418] "Attempting to sync node with API server" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.678799 4805 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.678839 4805 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.678866 4805 kubelet.go:324] "Adding apiserver pod source" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.678887 4805 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.683048 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.683146 4805 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.683162 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.683382 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.688409 4805 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.689687 4805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.692900 4805 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694402 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694429 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694440 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694448 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694463 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694473 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694482 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694496 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694506 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694514 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694526 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.694535 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.696506 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 
00:22:54.697041 4805 server.go:1280] "Started kubelet" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.697434 4805 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.703367 4805 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.704392 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.705507 4805 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 17 00:22:54 crc systemd[1]: Started Kubernetes Kubelet. Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.706859 4805 server.go:460] "Adding debug handlers to kubelet server" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.708753 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.708860 4805 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.713014 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:14:29.499096881 +0000 UTC Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.713178 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.713275 4805 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.713288 4805 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.713442 4805 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.713963 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="200ms" Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.714500 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.714598 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.713551 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894e0d8adcea04e default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:22:54.697005134 +0000 UTC m=+0.712814542,LastTimestamp:2026-02-17 00:22:54.697005134 +0000 UTC m=+0.712814542,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.715102 4805 factory.go:55] Registering systemd factory Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.715149 4805 factory.go:221] Registration of the systemd container factory successfully Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.715733 4805 factory.go:153] Registering CRI-O factory Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.715775 4805 factory.go:221] Registration of the crio container factory successfully Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.715911 4805 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.715952 4805 factory.go:103] Registering Raw factory Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.715982 4805 manager.go:1196] Started watching for new ooms in manager Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.717907 4805 manager.go:319] Starting recovery of all containers Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730065 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730156 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730196 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730232 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730258 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730294 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730358 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730384 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730413 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730441 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730463 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730481 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730498 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730523 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730574 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730610 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730647 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730672 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730695 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730718 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730742 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730774 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730797 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730822 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730846 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730873 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730901 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730932 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730970 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.730992 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731008 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731026 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731047 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731073 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731091 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731108 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731127 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731143 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731160 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731177 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731195 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731214 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731238 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731256 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731274 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731292 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731310 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731384 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731431 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731459 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731491 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731508 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731538 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731559 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731577 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731602 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731623 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731641 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731660 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731676 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731692 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731718 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731746 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731763 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731782 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731799 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731817 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731833 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731850 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731867 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731884 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731902 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731919 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731935 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731951 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731969 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.731987 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732004 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732023 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732049 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732068 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732086 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732102 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732121 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732138 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732156 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732174 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732191 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732207 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732223 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732241 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732259 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732277 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732293 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732309 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732411 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732432 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732448 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732466 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732483 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732500 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732517 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732533 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732549 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732571 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732590 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732610 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732628 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732646 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732666 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732685 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732707 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732726 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732744 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732770 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732786 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732825 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732843 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732862 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732879 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732896 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732912 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732929 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732946 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732971 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.732987 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733004 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733020 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733038 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733054 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733071 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733087 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733104 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733122 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733139 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733156 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733173 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733189 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733206 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733222 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733240 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733258 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733276 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733292 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733310 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733348 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733366 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733382 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733399 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733417 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733434 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733451 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733469 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733494 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733510 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733533 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733557 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733573 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733590 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733606 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733623 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733638 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733656 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733673 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733697 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733715 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733739 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733756 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733779 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733796 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733820 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733837 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733853 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733872 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733890 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733907 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733924 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733941 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733959 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733976 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.733993 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.734010 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.734027 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.734045 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.734061 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.734078 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.737282 4805 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.737466 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.737577 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.737663 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.737753 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.737832 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.737991 4805 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738055 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738088 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738192 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738215 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738313 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738400 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738429 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738457 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738476 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738504 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738572 4805 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738592 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738620 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738733 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738761 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738810 4805 reconstruct.go:97] "Volume reconstruction finished" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.738824 4805 reconciler.go:26] "Reconciler: start to sync state" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.755386 4805 manager.go:324] Recovery completed Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.772146 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.774766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.774831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.774854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.776132 4805 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.776161 4805 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.776190 4805 state_mem.go:36] "Initialized new in-memory state store" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.779264 4805 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.782464 4805 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.782674 4805 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.783175 4805 kubelet.go:2335] "Starting kubelet main sync loop" Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.783520 4805 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 17 00:22:54 crc kubenswrapper[4805]: W0217 00:22:54.785356 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.785465 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.802742 4805 policy_none.go:49] "None policy: Start" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.804571 4805 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.804774 4805 state_mem.go:35] "Initializing new in-memory state store" Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.814281 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.867588 4805 manager.go:334] "Starting Device Plugin manager" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.867722 4805 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.867752 4805 server.go:79] "Starting device plugin registration server" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.868501 4805 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.868533 4805 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.868919 4805 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.869070 4805 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.869090 4805 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.884363 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.884459 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.885860 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.885922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.885942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.886136 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.886309 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.886379 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.887429 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.887534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.887553 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.887563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.887682 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.887874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.887923 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.887978 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.887926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.888099 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.888490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.888523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.888533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.888773 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.888832 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.888899 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.889284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.889466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.889494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.890469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.890492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.890502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.890848 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.891192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.891273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.891298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.891375 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.891499 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.896485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.896537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.896556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.897910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.897964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.897983 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.898254 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.898321 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.899801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.899871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.899896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.914829 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="400ms" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.941710 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.941753 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.941779 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.941798 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.941817 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.941916 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.941981 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.942045 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.942093 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.942137 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.942177 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.942214 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.942254 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.942303 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.942364 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.969034 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.970462 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.970517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.970529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:54 crc kubenswrapper[4805]: I0217 00:22:54.970626 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 00:22:54 crc kubenswrapper[4805]: E0217 00:22:54.971356 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.043859 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.043930 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.043965 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.043997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044029 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044099 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044154 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044160 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044212 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044194 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044108 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044296 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044225 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044482 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044532 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044598 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044623 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044659 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044705 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044677 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044763 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044803 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044714 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044851 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.044832 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.045008 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.045067 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.172426 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.174300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.174380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.174398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.174432 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 00:22:55 crc kubenswrapper[4805]: E0217 00:22:55.174982 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.242017 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.269380 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.283939 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: W0217 00:22:55.295914 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-29d4416152bf8749788ec84804763faf24f1b2a6286a7585c582d214dfe6e14f WatchSource:0}: Error finding container 29d4416152bf8749788ec84804763faf24f1b2a6286a7585c582d214dfe6e14f: Status 404 returned error can't find the container with id 29d4416152bf8749788ec84804763faf24f1b2a6286a7585c582d214dfe6e14f Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.301141 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.312848 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:55 crc kubenswrapper[4805]: W0217 00:22:55.313984 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9b8bb412796bb5c9de79082837c978ae6bca3dbfedf79cd54b3bd1b92d6384f6 WatchSource:0}: Error finding container 9b8bb412796bb5c9de79082837c978ae6bca3dbfedf79cd54b3bd1b92d6384f6: Status 404 returned error can't find the container with id 9b8bb412796bb5c9de79082837c978ae6bca3dbfedf79cd54b3bd1b92d6384f6 Feb 17 00:22:55 crc kubenswrapper[4805]: E0217 00:22:55.315453 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="800ms" Feb 17 00:22:55 crc kubenswrapper[4805]: W0217 00:22:55.319108 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ecda1b62c1ca68d2e060ec4275ebaf5c34746898e80d173577cbcead823fd350 WatchSource:0}: Error finding container ecda1b62c1ca68d2e060ec4275ebaf5c34746898e80d173577cbcead823fd350: Status 404 returned error can't find the container with id ecda1b62c1ca68d2e060ec4275ebaf5c34746898e80d173577cbcead823fd350 Feb 17 00:22:55 crc kubenswrapper[4805]: W0217 00:22:55.323157 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6b9f754bfcf4f5070d3e81bb287c51176d9647b3ce91c666b64e4e0b691252f7 WatchSource:0}: Error finding container 6b9f754bfcf4f5070d3e81bb287c51176d9647b3ce91c666b64e4e0b691252f7: Status 404 returned error can't find the container with id 6b9f754bfcf4f5070d3e81bb287c51176d9647b3ce91c666b64e4e0b691252f7 Feb 17 00:22:55 crc kubenswrapper[4805]: W0217 00:22:55.339817 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-088f071158545704bd4da276178e7c7064cadb5793eb0103c001165ca8c4b4c8 WatchSource:0}: Error finding container 088f071158545704bd4da276178e7c7064cadb5793eb0103c001165ca8c4b4c8: Status 404 returned error can't find the container with id 088f071158545704bd4da276178e7c7064cadb5793eb0103c001165ca8c4b4c8 Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.576153 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.577593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.577641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.577656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.577684 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 00:22:55 crc 
kubenswrapper[4805]: E0217 00:22:55.578176 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Feb 17 00:22:55 crc kubenswrapper[4805]: W0217 00:22:55.638913 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:55 crc kubenswrapper[4805]: E0217 00:22:55.639047 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.705773 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.713917 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:43:29.044649675 +0000 UTC Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.788105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"088f071158545704bd4da276178e7c7064cadb5793eb0103c001165ca8c4b4c8"} Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.790476 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b9f754bfcf4f5070d3e81bb287c51176d9647b3ce91c666b64e4e0b691252f7"} Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.791652 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ecda1b62c1ca68d2e060ec4275ebaf5c34746898e80d173577cbcead823fd350"} Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.793431 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9b8bb412796bb5c9de79082837c978ae6bca3dbfedf79cd54b3bd1b92d6384f6"} Feb 17 00:22:55 crc kubenswrapper[4805]: I0217 00:22:55.795700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"29d4416152bf8749788ec84804763faf24f1b2a6286a7585c582d214dfe6e14f"} Feb 17 00:22:56 crc kubenswrapper[4805]: W0217 00:22:56.109357 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:56 crc kubenswrapper[4805]: E0217 00:22:56.109848 4805 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:56 crc kubenswrapper[4805]: E0217 00:22:56.116030 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="1.6s" Feb 17 00:22:56 crc kubenswrapper[4805]: W0217 00:22:56.135064 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:56 crc kubenswrapper[4805]: E0217 00:22:56.135143 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:56 crc kubenswrapper[4805]: W0217 00:22:56.332373 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:56 crc kubenswrapper[4805]: E0217 00:22:56.332459 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.378347 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.380169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.380208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.380219 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.380244 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 00:22:56 crc kubenswrapper[4805]: E0217 00:22:56.380691 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.691090 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 00:22:56 crc kubenswrapper[4805]: E0217 00:22:56.692562 4805 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed 
certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.705303 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.714180 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:59:54.85964584 +0000 UTC Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.802867 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b"} Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.802923 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.802938 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a"} Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.802961 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78"} Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.802979 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c"} Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.805566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.805616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.805654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.805699 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c" exitCode=0 Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.806014 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.806017 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c"} 
Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.807176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.807260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.807286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.808020 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c0a4e5b91051fca15d4fecb263b05823ec7f67e6fd4b81ef94ee0b3c0a47d079" exitCode=0 Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.808096 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c0a4e5b91051fca15d4fecb263b05823ec7f67e6fd4b81ef94ee0b3c0a47d079"} Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.808283 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.809566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.809600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.809619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.810785 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.812421 4805 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e28f12de6d0915f50c9b19c463074ea519583d136ef25f677a051425d8412692" exitCode=0 Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.812641 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.812658 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e28f12de6d0915f50c9b19c463074ea519583d136ef25f677a051425d8412692"} Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.817934 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.818100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.818130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.819942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.819999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:56 crc kubenswrapper[4805]: 
I0217 00:22:56.820016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.821430 4805 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150" exitCode=0 Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.821488 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150"} Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.821588 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.823952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.824121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:56 crc kubenswrapper[4805]: I0217 00:22:56.824291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.656185 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.705413 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.714449 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:36:32.51109817 +0000 UTC Feb 17 00:22:57 crc kubenswrapper[4805]: E0217 00:22:57.717218 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="3.2s" Feb 17 00:22:57 crc kubenswrapper[4805]: W0217 00:22:57.745246 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:57 crc kubenswrapper[4805]: E0217 00:22:57.745338 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.828621 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370"} Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 
00:22:57.828664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922"} Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.828674 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6"} Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.828682 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b"} Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.836183 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8ed846613f4227f6ddc8cec8e7108c2cd6651adbaefbc139b5583ed95c0f3c25" exitCode=0 Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.836284 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8ed846613f4227f6ddc8cec8e7108c2cd6651adbaefbc139b5583ed95c0f3c25"} Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.836349 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.837015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.837046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.837060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.840137 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"021afbddaf18bd2ef07ced69f95c2719a1423e4259c35edb63da641d3d3177b3"} Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.842369 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.843774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.843835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.843848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.847555 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cf126db3d482efbecea6828dc760735e023947be7a839fbda4a46382e20ca834"} Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.847601 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e6c606117277077af4108de0b9bbae3f0333b8109ce1ac898cea87277d56edb5"} Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.847613 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e35cb7f78f2c4171a849affbcb15fd06276969fb335a227f536fb43cff251872"} Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.847603 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.847624 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.848562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.848615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.848628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.848870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.848902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.848912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.981227 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.983038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.983073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.983085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:57 crc kubenswrapper[4805]: I0217 00:22:57.983108 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 00:22:57 crc kubenswrapper[4805]: E0217 00:22:57.983563 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Feb 17 00:22:58 crc kubenswrapper[4805]: W0217 00:22:58.241881 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:58 crc kubenswrapper[4805]: E0217 00:22:58.242001 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:58 crc kubenswrapper[4805]: W0217 00:22:58.367873 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Feb 17 00:22:58 crc kubenswrapper[4805]: E0217 00:22:58.367974 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.715334 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:33:27.462014202 +0000 UTC Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.751962 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.852079 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.854604 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a08a435a3ba53aa05ea24f00882570156fded642b2a0a1f5ddc0de3f968b2426" exitCode=255 Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.854693 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a08a435a3ba53aa05ea24f00882570156fded642b2a0a1f5ddc0de3f968b2426"} Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.854714 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.856028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.856098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.856116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.856951 4805 scope.go:117] "RemoveContainer" containerID="a08a435a3ba53aa05ea24f00882570156fded642b2a0a1f5ddc0de3f968b2426" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.861600 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f15e9f9bbac40d106480cdd718fb3ba66857f85ffabf354606e6bdfd9d07fd94" exitCode=0 Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.861643 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f15e9f9bbac40d106480cdd718fb3ba66857f85ffabf354606e6bdfd9d07fd94"} Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.861770 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.861811 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.861845 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.861888 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.861852 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.863175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.863211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.863228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.863483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.863529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.863547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.864142 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.864184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.864202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.864396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.864418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.864430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:58 crc kubenswrapper[4805]: I0217 00:22:58.920218 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.715840 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:56:08.533355135 +0000 UTC Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.867171 
4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.869650 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9"} Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.869684 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.870991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.871045 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.871063 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.874768 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"99ea42d2e2401d0c1e1355746ea38a91052518495f62507c64a5204de23f035f"} Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.874836 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c067377c70371a58e1bf519acfc7790f6c107e82d334077708672dc406331126"} Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.874857 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"793e1a54f123cdd6a443c39c386b959a335bddd983fe9c56d4cc3cbe28a06b0c"} Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.874866 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.874878 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fc9595f702e35756afce1cb4cf026b9cff0a6c053ddb62adbf472cc519b3bcdb"} Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.874902 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.876230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.876275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.876288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.876419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.876453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 
17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.876468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.942426 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:22:59 crc kubenswrapper[4805]: I0217 00:22:59.998021 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.427365 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.716075 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:21:38.165032339 +0000 UTC Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.886045 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8344e47c69903d4534cc2b32c3a53eada57375188ec6b289c05fdba81f01b427"} Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.886205 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.886221 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.886266 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.888367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.888368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.888447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.888468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.888482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.888492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.888503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.888430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:00 crc kubenswrapper[4805]: I0217 00:23:00.888620 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.004918 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.013756 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.025425 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.184163 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.185765 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.185802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.185814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.185839 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.717217 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:06:32.605995544 +0000 UTC Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.752960 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.753056 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.888983 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.889126 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.889141 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.893913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.894166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.894196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.894276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.894315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.894382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.894380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.894592 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:01 crc kubenswrapper[4805]: I0217 00:23:01.894624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:02 crc kubenswrapper[4805]: I0217 00:23:02.718182 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:29:33.725330101 +0000 UTC Feb 17 00:23:02 crc kubenswrapper[4805]: I0217 00:23:02.892139 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:02 crc kubenswrapper[4805]: I0217 00:23:02.893705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:02 crc kubenswrapper[4805]: I0217 00:23:02.893767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:02 crc kubenswrapper[4805]: I0217 00:23:02.893786 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:02 crc kubenswrapper[4805]: I0217 00:23:02.898871 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 17 00:23:02 crc kubenswrapper[4805]: I0217 00:23:02.899063 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:02 crc kubenswrapper[4805]: I0217 00:23:02.900285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:02 crc kubenswrapper[4805]: I0217 00:23:02.900388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:02 crc kubenswrapper[4805]: I0217 00:23:02.900415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:02 crc kubenswrapper[4805]: I0217 00:23:02.947445 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 17 00:23:03 crc kubenswrapper[4805]: I0217 00:23:03.718838 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:23:52.165015746 +0000 UTC Feb 17 00:23:03 crc kubenswrapper[4805]: I0217 00:23:03.894633 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:03 crc kubenswrapper[4805]: I0217 00:23:03.896221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:03 crc kubenswrapper[4805]: I0217 00:23:03.896276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:03 crc kubenswrapper[4805]: I0217 00:23:03.896294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 00:23:04 crc kubenswrapper[4805]: I0217 00:23:04.719034 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:30:53.382682691 +0000 UTC Feb 17 00:23:04 crc kubenswrapper[4805]: E0217 00:23:04.888026 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 00:23:05 crc kubenswrapper[4805]: I0217 00:23:05.719719 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:45:46.005138516 +0000 UTC Feb 17 00:23:06 crc kubenswrapper[4805]: I0217 00:23:06.720282 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:48:22.3396752 +0000 UTC Feb 17 00:23:07 crc kubenswrapper[4805]: I0217 00:23:07.663220 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:23:07 crc kubenswrapper[4805]: I0217 00:23:07.663399 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:07 crc kubenswrapper[4805]: I0217 00:23:07.664841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:07 crc kubenswrapper[4805]: I0217 00:23:07.664900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:07 crc kubenswrapper[4805]: I0217 00:23:07.664917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:07 crc kubenswrapper[4805]: I0217 00:23:07.721385 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:34:04.232563134 +0000 UTC Feb 17 00:23:08 crc kubenswrapper[4805]: I0217 00:23:08.706521 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 17 00:23:08 crc kubenswrapper[4805]: I0217 00:23:08.721913 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:32:45.897145052 +0000 UTC Feb 17 00:23:09 crc kubenswrapper[4805]: W0217 00:23:09.104786 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 17 00:23:09 crc kubenswrapper[4805]: I0217 00:23:09.105379 4805 trace.go:236] Trace[559584880]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 00:22:59.103) (total time: 10001ms): Feb 17 00:23:09 crc kubenswrapper[4805]: Trace[559584880]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:23:09.104) Feb 17 00:23:09 crc kubenswrapper[4805]: Trace[559584880]: [10.001907488s] [10.001907488s] END Feb 17 00:23:09 crc kubenswrapper[4805]: E0217 00:23:09.105425 4805 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 17 00:23:09 crc kubenswrapper[4805]: I0217 00:23:09.206420 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 00:23:09 crc kubenswrapper[4805]: I0217 00:23:09.206521 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 00:23:09 crc kubenswrapper[4805]: I0217 00:23:09.221521 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 00:23:09 crc kubenswrapper[4805]: I0217 00:23:09.221598 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 17 00:23:09 crc kubenswrapper[4805]: I0217 00:23:09.722228 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:54:33.28475857 +0000 UTC Feb 17 00:23:09 crc kubenswrapper[4805]: I0217 00:23:09.943476 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 17 00:23:09 crc kubenswrapper[4805]: I0217 00:23:09.943562 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 17 00:23:10 crc kubenswrapper[4805]: I0217 00:23:10.005403 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]log ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]etcd ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 17 00:23:10 crc 
kubenswrapper[4805]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/generic-apiserver-start-informers ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/priority-and-fairness-filter ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/start-apiextensions-informers ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/start-apiextensions-controllers ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/crd-informer-synced ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/start-system-namespaces-controller ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 17 00:23:10 crc kubenswrapper[4805]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 17 00:23:10 crc kubenswrapper[4805]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/bootstrap-controller ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/start-kube-aggregator-informers ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/apiservice-registration-controller ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/apiservice-discovery-controller ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]autoregister-completion ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/apiservice-openapi-controller ok Feb 17 00:23:10 crc kubenswrapper[4805]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 17 00:23:10 crc kubenswrapper[4805]: livez check failed Feb 17 00:23:10 crc kubenswrapper[4805]: I0217 00:23:10.008796 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:23:10 crc kubenswrapper[4805]: I0217 00:23:10.723141 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2026-01-06 18:09:59.565734048 +0000 UTC Feb 17 00:23:11 crc kubenswrapper[4805]: I0217 00:23:11.723688 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:20:42.105326484 +0000 UTC Feb 17 00:23:11 crc kubenswrapper[4805]: I0217 00:23:11.753247 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 00:23:11 crc kubenswrapper[4805]: I0217 00:23:11.753364 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 00:23:12 crc kubenswrapper[4805]: I0217 00:23:12.725240 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:41:29.733367202 +0000 UTC Feb 17 00:23:12 crc kubenswrapper[4805]: I0217 00:23:12.938416 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 17 00:23:12 crc kubenswrapper[4805]: I0217 00:23:12.938717 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:12 crc kubenswrapper[4805]: I0217 00:23:12.940418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:12 crc kubenswrapper[4805]: I0217 00:23:12.940593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:12 crc kubenswrapper[4805]: I0217 00:23:12.940667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:12 crc kubenswrapper[4805]: I0217 00:23:12.960574 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 17 00:23:13 crc kubenswrapper[4805]: I0217 00:23:13.726084 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:48:53.46381796 +0000 UTC Feb 17 00:23:13 crc kubenswrapper[4805]: I0217 00:23:13.750126 4805 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 00:23:13 crc kubenswrapper[4805]: I0217 00:23:13.920821 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:13 crc kubenswrapper[4805]: I0217 00:23:13.925986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:13 crc kubenswrapper[4805]: I0217 00:23:13.926022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:13 crc kubenswrapper[4805]: I0217 00:23:13.926030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:14 crc 
kubenswrapper[4805]: E0217 00:23:14.206581 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.209583 4805 trace.go:236] Trace[1071053718]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 00:23:03.248) (total time: 10960ms): Feb 17 00:23:14 crc kubenswrapper[4805]: Trace[1071053718]: ---"Objects listed" error: 10960ms (00:23:14.209) Feb 17 00:23:14 crc kubenswrapper[4805]: Trace[1071053718]: [10.960813363s] [10.960813363s] END Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.209619 4805 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.210724 4805 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.211675 4805 trace.go:236] Trace[1602193953]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 00:23:02.635) (total time: 11576ms): Feb 17 00:23:14 crc kubenswrapper[4805]: Trace[1602193953]: ---"Objects listed" error: 11576ms (00:23:14.211) Feb 17 00:23:14 crc kubenswrapper[4805]: Trace[1602193953]: [11.576281885s] [11.576281885s] END Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.211700 4805 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.211720 4805 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 17 00:23:14 crc kubenswrapper[4805]: E0217 00:23:14.211910 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.226652 4805 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.267222 4805 csr.go:261] certificate signing request csr-g8cvf is approved, waiting to be issued Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.282609 4805 csr.go:257] certificate signing request csr-g8cvf is issued Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.535947 4805 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 17 00:23:14 crc kubenswrapper[4805]: W0217 00:23:14.536213 4805 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 00:23:14 crc kubenswrapper[4805]: W0217 00:23:14.536246 4805 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 00:23:14 crc kubenswrapper[4805]: W0217 00:23:14.536272 4805 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less 
than a second and no items received Feb 17 00:23:14 crc kubenswrapper[4805]: E0217 00:23:14.536348 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.106:54684->38.102.83.106:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894e0d8d33f963d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:22:55.325165117 +0000 UTC m=+1.340974555,LastTimestamp:2026-02-17 00:22:55.325165117 +0000 UTC m=+1.340974555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:23:14 crc kubenswrapper[4805]: W0217 00:23:14.536620 4805 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.690986 4805 apiserver.go:52] "Watching apiserver" Feb 17 00:23:14 crc kubenswrapper[4805]: I0217 00:23:14.728643 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:20:11.229501795 +0000 UTC Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.167269 4805 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.167628 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.168004 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.168290 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.168420 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.168448 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.168469 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.168497 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.168563 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.168693 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.168757 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.174002 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.174254 4805 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.174744 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.174919 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.175099 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.177581 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.177617 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.177594 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.177798 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.180725 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:23:15 crc 
kubenswrapper[4805]: I0217 00:23:15.186890 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.193998 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-m6rzz"] Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.194543 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-m6rzz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.197068 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.198536 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.199424 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.199864 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.200135 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.223024 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47738->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.223087 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47738->192.168.126.11:17697: read: connection reset by peer" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.230270 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.250372 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.263289 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267137 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267175 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267194 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267209 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267226 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267246 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267267 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267309 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267340 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267357 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267371 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267385 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267399 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267414 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267444 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267482 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267508 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267525 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267539 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267568 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267583 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267599 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267614 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267609 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267644 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267661 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267608 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267679 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267696 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267712 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267727 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267743 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267771 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267789 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267808 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267826 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267835 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267871 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267898 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267926 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267973 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.267998 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268022 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268046 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268068 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268091 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268106 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268123 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268139 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268155 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268171 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268186 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268203 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268219 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268236 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 17 00:23:15 crc kubenswrapper[4805]: 
I0217 00:23:15.268276 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268295 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268313 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268312 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268349 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268349 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268369 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268391 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268408 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268423 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268443 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268461 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268476 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268496 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268511 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268526 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268540 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268558 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268577 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268594 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268610 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268626 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268645 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268681 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268698 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268719 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268735 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268753 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268769 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268786 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268804 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268820 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268836 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268851 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268868 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268892 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268908 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268924 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268939 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268955 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268970 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268986 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269050 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269067 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269084 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269101 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269116 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269133 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269149 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269165 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269183 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269198 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269213 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269228 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269244 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 
00:23:15.269260 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269286 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269301 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269316 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269351 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269366 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269386 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269456 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269474 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269490 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: 
\"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269505 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269523 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269539 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269559 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269575 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269590 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269606 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269621 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269640 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269656 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269672 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269688 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269705 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269721 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269737 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269752 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269767 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269785 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269803 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269820 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269836 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269851 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269868 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269882 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269898 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269913 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269928 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269944 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269968 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269984 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269999 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270015 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270030 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270047 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270062 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270077 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270095 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270116 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270132 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270148 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270164 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270181 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270197 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270223 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270240 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270259 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270291 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270307 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270473 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270493 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270510 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270527 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270546 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270563 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270579 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270596 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270613 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270628 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270645 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270662 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270681 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270697 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270713 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270729 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270747 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270764 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270781 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270797 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 
00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270814 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270831 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270848 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270864 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270879 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270895 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270912 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270927 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270943 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270959 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270976 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270992 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271032 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271060 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271080 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271099 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271117 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271135 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271154 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271175 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271195 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271213 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271231 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271248 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271264 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271281 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271346 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271357 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271367 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271377 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271386 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271396 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.272096 4805 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268623 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268713 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.268875 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269372 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269534 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.269915 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270075 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.270240 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.271569 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.272888 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.273174 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.273307 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.273445 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.273456 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.273759 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.274035 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.274403 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.274456 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.274770 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.274786 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.274996 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.275052 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.275470 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.275536 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.275794 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.275860 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.276028 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.276061 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.276181 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.276254 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.276656 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.276681 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.276966 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.276985 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.277448 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.277565 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.283482 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.283418 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.277662 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.278005 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.278677 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.278900 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.279969 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.282955 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.284872 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.285059 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.285391 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.285623 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.286344 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.286290 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.286699 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 00:18:14 +0000 UTC, rotation deadline is 2026-12-12 22:16:46.855517116 +0000 UTC Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.286764 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7173h53m31.568757146s for next certificate rotation Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.287159 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.287547 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.287775 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.287642 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.288113 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.288162 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.288215 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:15.788200281 +0000 UTC m=+21.804009679 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.288386 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.288509 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.288639 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.288696 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.288729 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.288804 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.288895 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.288920 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.289134 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.289340 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.289528 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.289803 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.290164 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:15.790141127 +0000 UTC m=+21.805950585 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.294744 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.295012 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.295231 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.295619 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.295821 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.295981 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.296034 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). 
InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.296239 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.296269 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.296431 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.296659 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.297222 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.297578 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.297768 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.300647 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.301362 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.301651 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.301797 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.301940 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.302452 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.302874 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.302867 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.303548 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.303747 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.304353 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.304397 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.287437 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.305245 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.305426 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.305670 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.305826 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.307027 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.307363 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.307710 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.308417 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.308640 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.308878 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 00:23:15.808851141 +0000 UTC m=+21.824660539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.301723 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.310219 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.310318 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.310373 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.311593 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.312069 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.312317 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.312363 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.312566 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.312734 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.312918 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.313102 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.313679 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.313790 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.313844 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.313879 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.313909 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.314110 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.314383 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.314413 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.314604 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.315176 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.315902 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.317834 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.318068 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.313832 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.320554 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.320682 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.320720 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.320265 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.320945 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.321096 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.321209 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.321609 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.321954 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.322225 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.322343 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.322217 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.322468 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.322539 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.322560 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.322582 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.322587 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.322592 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.322594 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.322724 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:15.822679416 +0000 UTC m=+21.838488864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.322835 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.322986 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.323185 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.323236 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.323617 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.323685 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.323967 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.324050 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.327314 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.328832 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.329256 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.329298 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.329389 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:15.829356546 +0000 UTC m=+21.845165944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.329649 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.329784 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.329886 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.330056 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.330290 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.330508 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.330960 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.331473 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.333730 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.333802 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.333851 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.334550 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.334522 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.334687 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.334720 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.334763 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.334845 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.334890 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.335080 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.335218 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.335288 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.336397 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.336589 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.336571 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.336684 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.336710 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.336868 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.337142 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.337316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.337413 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.337590 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.338422 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.343442 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.343574 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.343880 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.343993 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.344063 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.346750 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.348491 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.349443 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.349640 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.349651 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.351161 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.351619 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.352801 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.359202 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.359439 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.368494 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372519 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbc29\" (UniqueName: \"kubernetes.io/projected/56d5f74f-1f28-476b-9308-e6a93af909eb-kube-api-access-nbc29\") pod \"node-ca-m6rzz\" (UID: \"56d5f74f-1f28-476b-9308-e6a93af909eb\") " pod="openshift-image-registry/node-ca-m6rzz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372573 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56d5f74f-1f28-476b-9308-e6a93af909eb-host\") pod \"node-ca-m6rzz\" (UID: \"56d5f74f-1f28-476b-9308-e6a93af909eb\") " pod="openshift-image-registry/node-ca-m6rzz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372588 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372617 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/56d5f74f-1f28-476b-9308-e6a93af909eb-serviceca\") pod \"node-ca-m6rzz\" (UID: \"56d5f74f-1f28-476b-9308-e6a93af909eb\") " pod="openshift-image-registry/node-ca-m6rzz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372649 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372679 4805 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372690 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372699 4805 reconciler_common.go:293] "Volume detached for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372708 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372716 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372724 4805 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372732 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372740 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372748 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372757 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372764 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372773 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372781 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372790 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372799 4805 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372807 4805 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372816 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372824 4805 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372833 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372840 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372848 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372857 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372865 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372872 4805 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372880 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372888 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372896 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372904 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372913 4805 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372921 4805 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372930 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372805 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.372963 4805 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373016 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373028 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373039 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373047 4805 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373057 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373065 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373074 4805 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373083 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" 
Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373092 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373100 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373108 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373117 4805 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373126 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373135 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373143 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373151 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373159 4805 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373168 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373176 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373185 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373194 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath 
\"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373202 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373210 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373218 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373227 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373235 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373243 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373252 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373260 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373268 4805 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373276 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373285 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373294 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373301 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" 
Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373310 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373332 4805 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373344 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373355 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373366 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373377 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373389 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373400 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373411 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373421 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373431 4805 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373441 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373452 4805 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373463 4805 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373474 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373488 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373500 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.373995 4805 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374012 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374023 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374036 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374048 4805 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374063 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374074 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374087 4805 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374099 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374109 4805 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374121 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374134 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374145 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374157 4805 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374168 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374180 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374193 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374203 4805 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374211 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374219 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374228 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374239 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374250 4805 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374261 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374272 4805 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374281 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374291 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374302 4805 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374313 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374342 4805 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374355 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374366 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374376 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374385 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374396 4805 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374408 4805 reconciler_common.go:293] "Volume 
detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374456 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374469 4805 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374727 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.374480 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375406 4805 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375422 4805 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375437 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375450 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375464 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375478 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375491 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375505 4805 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375519 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375533 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375546 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375558 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375570 4805 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375582 4805 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375596 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375607 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375619 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375632 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375644 4805 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375656 4805 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375669 4805 reconciler_common.go:293] "Volume 
detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375681 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375692 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375703 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375715 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375726 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375738 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375752 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375763 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375775 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375786 4805 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375799 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375814 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375857 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375870 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375882 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375894 4805 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375906 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375917 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375929 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375941 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375952 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375964 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375975 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.375987 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376000 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376011 4805 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376022 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376034 4805 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376075 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376087 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376099 4805 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376111 4805 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376152 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376166 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376179 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376192 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376231 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376244 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376256 4805 reconciler_common.go:293] "Volume detached for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376268 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376280 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.376315 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.377155 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.377169 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.478282 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/56d5f74f-1f28-476b-9308-e6a93af909eb-serviceca\") pod \"node-ca-m6rzz\" (UID: \"56d5f74f-1f28-476b-9308-e6a93af909eb\") " pod="openshift-image-registry/node-ca-m6rzz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.478411 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbc29\" (UniqueName: \"kubernetes.io/projected/56d5f74f-1f28-476b-9308-e6a93af909eb-kube-api-access-nbc29\") pod \"node-ca-m6rzz\" (UID: \"56d5f74f-1f28-476b-9308-e6a93af909eb\") " pod="openshift-image-registry/node-ca-m6rzz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.478446 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56d5f74f-1f28-476b-9308-e6a93af909eb-host\") pod \"node-ca-m6rzz\" (UID: \"56d5f74f-1f28-476b-9308-e6a93af909eb\") " pod="openshift-image-registry/node-ca-m6rzz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.478525 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.478592 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56d5f74f-1f28-476b-9308-e6a93af909eb-host\") pod \"node-ca-m6rzz\" (UID: \"56d5f74f-1f28-476b-9308-e6a93af909eb\") " pod="openshift-image-registry/node-ca-m6rzz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.480176 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/56d5f74f-1f28-476b-9308-e6a93af909eb-serviceca\") pod \"node-ca-m6rzz\" (UID: 
\"56d5f74f-1f28-476b-9308-e6a93af909eb\") " pod="openshift-image-registry/node-ca-m6rzz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.494221 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.515480 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbc29\" (UniqueName: \"kubernetes.io/projected/56d5f74f-1f28-476b-9308-e6a93af909eb-kube-api-access-nbc29\") pod \"node-ca-m6rzz\" (UID: \"56d5f74f-1f28-476b-9308-e6a93af909eb\") " pod="openshift-image-registry/node-ca-m6rzz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.523951 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.531165 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.538017 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-m6rzz" Feb 17 00:23:15 crc kubenswrapper[4805]: W0217 00:23:15.554586 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-a97150f72a7c8e6691d9e97f24296840959f1b80eb45932e58b0a3a87d8ef36a WatchSource:0}: Error finding container a97150f72a7c8e6691d9e97f24296840959f1b80eb45932e58b0a3a87d8ef36a: Status 404 returned error can't find the container with id a97150f72a7c8e6691d9e97f24296840959f1b80eb45932e58b0a3a87d8ef36a Feb 17 00:23:15 crc kubenswrapper[4805]: W0217 00:23:15.557525 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-27498b023542553ad495b827511be49481ca163722d9cfd033f816437920dc9d WatchSource:0}: Error finding container 27498b023542553ad495b827511be49481ca163722d9cfd033f816437920dc9d: Status 404 returned error can't find the container with id 27498b023542553ad495b827511be49481ca163722d9cfd033f816437920dc9d Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.599245 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lk6fw"] Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.599544 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.600585 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5lvnd"] Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.601097 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ckkzk"] Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.601317 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.601484 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.602056 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.602167 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.605268 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.605648 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.605685 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.605419 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.605301 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-86xnz"] Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.605597 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.605883 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.605979 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.606037 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.606133 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.606240 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-86xnz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.606251 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.614802 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.615212 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.616037 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.617312 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.630195 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://a08a435a3ba53aa05ea24f00882570156fded642b2a0a1f5ddc0de3f968b2426\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:22:58Z\\\",\\\"message\\\":\\\"W0217 00:22:57.923291 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 00:22:57.923581 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771287777 cert, and key in /tmp/serving-cert-4207032538/serving-signer.crt, /tmp/serving-cert-4207032538/serving-signer.key\\\\nI0217 00:22:58.201245 1 observer_polling.go:159] Starting file observer\\\\nW0217 00:22:58.205546 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 00:22:58.205795 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:22:58.207603 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4207032538/tls.crt::/tmp/serving-cert-4207032538/tls.key\\\\\\\"\\\\nF0217 00:22:58.420450 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.642524 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.657444 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.665520 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.678378 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681001 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-var-lib-kubelet\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681051 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-hostroot\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681118 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-system-cni-dir\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681172 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-run-multus-certs\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681204 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-proxy-tls\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681262 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-run-k8s-cni-cncf-io\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681290 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: 
\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681364 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-cnibin\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681393 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-etc-kubernetes\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681444 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-daemon-config\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681728 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-os-release\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681852 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-rootfs\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.681944 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-system-cni-dir\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682038 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d03ce26a-37aa-4bc4-8057-f1f9c158868b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682066 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-mcd-auth-proxy-config\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-conf-dir\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682138 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxtj\" (UniqueName: \"kubernetes.io/projected/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-kube-api-access-wzxtj\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682178 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-os-release\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npqgk\" (UniqueName: \"kubernetes.io/projected/d03ce26a-37aa-4bc4-8057-f1f9c158868b-kube-api-access-npqgk\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682215 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-run-netns\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682251 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-var-lib-cni-multus\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682272 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5da6b304-e28f-4666-817f-06bcc107e3fe-cni-binary-copy\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682341 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-cni-dir\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682367 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-var-lib-cni-bin\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682386 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-cnibin\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682404 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-socket-dir-parent\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682431 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxpp5\" (UniqueName: \"kubernetes.io/projected/5da6b304-e28f-4666-817f-06bcc107e3fe-kube-api-access-sxpp5\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682449 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d03ce26a-37aa-4bc4-8057-f1f9c158868b-cni-binary-copy\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dee9dbb9-55c3-4c05-b86a-e889213c20b1-hosts-file\") pod \"node-resolver-86xnz\" (UID: \"dee9dbb9-55c3-4c05-b86a-e889213c20b1\") " pod="openshift-dns/node-resolver-86xnz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.682526 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fg8n\" (UniqueName: \"kubernetes.io/projected/dee9dbb9-55c3-4c05-b86a-e889213c20b1-kube-api-access-9fg8n\") pod \"node-resolver-86xnz\" (UID: \"dee9dbb9-55c3-4c05-b86a-e889213c20b1\") " pod="openshift-dns/node-resolver-86xnz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.691992 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.710066 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.720382 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.729296 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:58:33.19296399 +0000 UTC Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.729481 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.737424 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.743939 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.761822 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.771649 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783288 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-mcd-auth-proxy-config\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783351 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-conf-dir\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783384 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzxtj\" (UniqueName: \"kubernetes.io/projected/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-kube-api-access-wzxtj\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783403 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-os-release\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783396 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783497 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-conf-dir\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npqgk\" (UniqueName: \"kubernetes.io/projected/d03ce26a-37aa-4bc4-8057-f1f9c158868b-kube-api-access-npqgk\") pod 
\"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5da6b304-e28f-4666-817f-06bcc107e3fe-cni-binary-copy\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783608 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-run-netns\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783644 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-var-lib-cni-multus\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783678 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-run-netns\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783640 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-os-release\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783713 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-var-lib-cni-multus\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783690 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-cni-dir\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783766 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-var-lib-cni-bin\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783787 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-cnibin\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc 
kubenswrapper[4805]: I0217 00:23:15.783809 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-cnibin\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783809 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dee9dbb9-55c3-4c05-b86a-e889213c20b1-hosts-file\") pod \"node-resolver-86xnz\" (UID: \"dee9dbb9-55c3-4c05-b86a-e889213c20b1\") " pod="openshift-dns/node-resolver-86xnz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fg8n\" (UniqueName: \"kubernetes.io/projected/dee9dbb9-55c3-4c05-b86a-e889213c20b1-kube-api-access-9fg8n\") pod \"node-resolver-86xnz\" (UID: \"dee9dbb9-55c3-4c05-b86a-e889213c20b1\") " pod="openshift-dns/node-resolver-86xnz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783849 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/dee9dbb9-55c3-4c05-b86a-e889213c20b1-hosts-file\") pod \"node-resolver-86xnz\" (UID: \"dee9dbb9-55c3-4c05-b86a-e889213c20b1\") " pod="openshift-dns/node-resolver-86xnz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783860 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-socket-dir-parent\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783878 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxpp5\" (UniqueName: \"kubernetes.io/projected/5da6b304-e28f-4666-817f-06bcc107e3fe-kube-api-access-sxpp5\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783893 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d03ce26a-37aa-4bc4-8057-f1f9c158868b-cni-binary-copy\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783915 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-system-cni-dir\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783930 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-var-lib-kubelet\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" 
(UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-hostroot\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783951 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-cni-dir\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783960 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-run-k8s-cni-cncf-io\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784003 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-run-multus-certs\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784016 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-system-cni-dir\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.783789 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-var-lib-cni-bin\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784009 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-socket-dir-parent\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784057 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-hostroot\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784093 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-run-k8s-cni-cncf-io\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784107 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-mcd-auth-proxy-config\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 
17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784310 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-run-multus-certs\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784020 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-proxy-tls\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784098 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-host-var-lib-kubelet\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784377 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784411 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-cnibin\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-etc-kubernetes\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784448 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-daemon-config\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784464 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d03ce26a-37aa-4bc4-8057-f1f9c158868b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784482 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-os-release\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784495 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rootfs\" (UniqueName: \"kubernetes.io/host-path/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-rootfs\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784508 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-system-cni-dir\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784536 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5da6b304-e28f-4666-817f-06bcc107e3fe-cni-binary-copy\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784580 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-cnibin\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784554 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-system-cni-dir\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784645 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-os-release\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784668 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-rootfs\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.784715 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5da6b304-e28f-4666-817f-06bcc107e3fe-etc-kubernetes\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.785138 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d03ce26a-37aa-4bc4-8057-f1f9c158868b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.785144 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5da6b304-e28f-4666-817f-06bcc107e3fe-multus-daemon-config\") pod \"multus-lk6fw\" (UID: 
\"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.785216 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d03ce26a-37aa-4bc4-8057-f1f9c158868b-cni-binary-copy\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.785774 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d03ce26a-37aa-4bc4-8057-f1f9c158868b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.790670 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-proxy-tls\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.800451 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.801927 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fg8n\" (UniqueName: \"kubernetes.io/projected/dee9dbb9-55c3-4c05-b86a-e889213c20b1-kube-api-access-9fg8n\") pod \"node-resolver-86xnz\" (UID: \"dee9dbb9-55c3-4c05-b86a-e889213c20b1\") " pod="openshift-dns/node-resolver-86xnz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.811836 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npqgk\" (UniqueName: \"kubernetes.io/projected/d03ce26a-37aa-4bc4-8057-f1f9c158868b-kube-api-access-npqgk\") pod \"multus-additional-cni-plugins-5lvnd\" (UID: \"d03ce26a-37aa-4bc4-8057-f1f9c158868b\") " pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.812093 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzxtj\" (UniqueName: \"kubernetes.io/projected/2531e0b8-5ad4-4dd3-86b9-bd6dec526041-kube-api-access-wzxtj\") pod \"machine-config-daemon-ckkzk\" (UID: \"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\") " pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.813047 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxpp5\" (UniqueName: \"kubernetes.io/projected/5da6b304-e28f-4666-817f-06bcc107e3fe-kube-api-access-sxpp5\") pod \"multus-lk6fw\" (UID: \"5da6b304-e28f-4666-817f-06bcc107e3fe\") " pod="openshift-multus/multus-lk6fw" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.813242 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.831267 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.833705 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-86xnz" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.849230 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: W0217 00:23:15.855747 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddee9dbb9_55c3_4c05_b86a_e889213c20b1.slice/crio-c89a6d40870260de276c74b544b16c93a3e90cf55aebd0dc7eadeeabbcd0c419 WatchSource:0}: Error finding container c89a6d40870260de276c74b544b16c93a3e90cf55aebd0dc7eadeeabbcd0c419: Status 404 returned error can't find the container with id c89a6d40870260de276c74b544b16c93a3e90cf55aebd0dc7eadeeabbcd0c419 Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.870760 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.881557 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a08a435a3ba53aa05ea24f00882570156fded642b2a0a1f5ddc0de3f968b2426\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:22:58Z\\\",\\\"message\\\":\\\"W0217 00:22:57.923291 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 00:22:57.923581 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771287777 cert, and key in /tmp/serving-cert-4207032538/serving-signer.crt, /tmp/serving-cert-4207032538/serving-signer.key\\\\nI0217 00:22:58.201245 1 observer_polling.go:159] Starting file observer\\\\nW0217 00:22:58.205546 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 00:22:58.205795 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:22:58.207603 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4207032538/tls.crt::/tmp/serving-cert-4207032538/tls.key\\\\\\\"\\\\nF0217 00:22:58.420450 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.885803 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.885912 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.886119 4805 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:23:16.886100782 +0000 UTC m=+22.901910180 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.886200 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.886238 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:16.886227825 +0000 UTC m=+22.902037223 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.886706 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.886765 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.886806 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.886854 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.886911 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: 
E0217 00:23:15.886925 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.886936 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.886967 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:16.886956756 +0000 UTC m=+22.902766154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.887260 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.887303 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:16.887293516 +0000 UTC m=+22.903102914 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.888100 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.888123 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:15 crc kubenswrapper[4805]: E0217 00:23:15.888205 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:16.888179731 +0000 UTC m=+22.903989199 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.953535 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tbr6r"] Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.954274 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.957086 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.957460 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.957606 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.958961 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.959005 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.958961 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.958985 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.973820 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.984313 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987377 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-slash\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987420 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-var-lib-openvswitch\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987459 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-openvswitch\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987523 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-log-socket\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987567 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovn-node-metrics-cert\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987605 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-ovn\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987646 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-config\") pod \"ovnkube-node-tbr6r\" 
(UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987672 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-script-lib\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987687 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-bin\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987713 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-netns\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987726 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-node-log\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987745 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-kubelet\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987758 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-systemd-units\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987771 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-etc-openvswitch\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-systemd\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987802 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-env-overrides\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987826 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987846 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-ovn-kubernetes\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987868 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-netd\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:15 crc kubenswrapper[4805]: I0217 00:23:15.987882 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfgww\" (UniqueName: \"kubernetes.io/projected/8d9024ef-7937-42b2-8fbc-60db984b9a2f-kube-api-access-bfgww\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:15.999610 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.011956 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.016785 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lk6fw" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.031667 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: W0217 00:23:16.035066 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5da6b304_e28f_4666_817f_06bcc107e3fe.slice/crio-5f6b719203ce8a78e6a6372996d2971a65904c04bee0d4d1c5e232c01af007c9 WatchSource:0}: Error finding container 5f6b719203ce8a78e6a6372996d2971a65904c04bee0d4d1c5e232c01af007c9: Status 404 returned error can't find the container with id 
5f6b719203ce8a78e6a6372996d2971a65904c04bee0d4d1c5e232c01af007c9 Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.042105 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a08a435a3ba53aa05ea24f00882570156fded642b2a0a1f5ddc0de3f968b2426\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:22:58Z\\\",\\\"message\\\":\\\"W0217 00:22:57.923291 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 00:22:57.923581 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771287777 cert, and key in /tmp/serving-cert-4207032538/serving-signer.crt, /tmp/serving-cert-4207032538/serving-signer.key\\\\nI0217 00:22:58.201245 1 observer_polling.go:159] Starting file observer\\\\nW0217 00:22:58.205546 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 00:22:58.205795 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:22:58.207603 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4207032538/tls.crt::/tmp/serving-cert-4207032538/tls.key\\\\\\\"\\\\nF0217 00:22:58.420450 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.053168 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 
17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.063033 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.070813 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.078496 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.086111 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088448 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-ovn\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088527 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-config\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088545 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-script-lib\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088561 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-netns\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088565 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-ovn\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088575 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-node-log\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088616 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-bin\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088618 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-node-log\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088655 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-kubelet\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088658 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-netns\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088676 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-systemd-units\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088695 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-kubelet\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088698 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-etc-openvswitch\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088719 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-etc-openvswitch\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088720 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-bin\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088738 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-systemd-units\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088751 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-systemd\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088735 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-systemd\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088800 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-env-overrides\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088847 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-ovn-kubernetes\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-netd\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088923 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfgww\" (UniqueName: \"kubernetes.io/projected/8d9024ef-7937-42b2-8fbc-60db984b9a2f-kube-api-access-bfgww\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088943 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-slash\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088964 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-var-lib-openvswitch\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.088965 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-ovn-kubernetes\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089001 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-openvswitch\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089008 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-slash\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089003 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089022 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-log-socket\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089037 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-netd\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089046 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-log-socket\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089055 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovn-node-metrics-cert\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089078 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-var-lib-openvswitch\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089079 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-openvswitch\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089282 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-script-lib\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089360 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-env-overrides\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.089358 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-config\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.092780 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovn-node-metrics-cert\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.096707 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.102260 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.105332 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfgww\" (UniqueName: \"kubernetes.io/projected/8d9024ef-7937-42b2-8fbc-60db984b9a2f-kube-api-access-bfgww\") pod \"ovnkube-node-tbr6r\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.105454 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.109653 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" Feb 17 00:23:16 crc kubenswrapper[4805]: W0217 00:23:16.114127 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2531e0b8_5ad4_4dd3_86b9_bd6dec526041.slice/crio-09247121b0b510e86626e6b0b3d6b8a19c7d15761888667a8e2d997cbba76f4f WatchSource:0}: Error finding container 09247121b0b510e86626e6b0b3d6b8a19c7d15761888667a8e2d997cbba76f4f: Status 404 returned error can't find the container with id 09247121b0b510e86626e6b0b3d6b8a19c7d15761888667a8e2d997cbba76f4f Feb 17 00:23:16 crc kubenswrapper[4805]: W0217 00:23:16.130654 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd03ce26a_37aa_4bc4_8057_f1f9c158868b.slice/crio-4a5ccf2165c8cb3a7dfa701dadf36a07a2f46655d52cbf29bd861577dd6736d1 WatchSource:0}: Error finding container 4a5ccf2165c8cb3a7dfa701dadf36a07a2f46655d52cbf29bd861577dd6736d1: Status 404 returned error can't find the container with id 4a5ccf2165c8cb3a7dfa701dadf36a07a2f46655d52cbf29bd861577dd6736d1 Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.172576 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" event={"ID":"d03ce26a-37aa-4bc4-8057-f1f9c158868b","Type":"ContainerStarted","Data":"4a5ccf2165c8cb3a7dfa701dadf36a07a2f46655d52cbf29bd861577dd6736d1"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.173824 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-86xnz" event={"ID":"dee9dbb9-55c3-4c05-b86a-e889213c20b1","Type":"ContainerStarted","Data":"43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.173866 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-86xnz" event={"ID":"dee9dbb9-55c3-4c05-b86a-e889213c20b1","Type":"ContainerStarted","Data":"c89a6d40870260de276c74b544b16c93a3e90cf55aebd0dc7eadeeabbcd0c419"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.174465 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"27498b023542553ad495b827511be49481ca163722d9cfd033f816437920dc9d"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.175530 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"09247121b0b510e86626e6b0b3d6b8a19c7d15761888667a8e2d997cbba76f4f"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.176930 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m6rzz" event={"ID":"56d5f74f-1f28-476b-9308-e6a93af909eb","Type":"ContainerStarted","Data":"de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.176959 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-m6rzz" event={"ID":"56d5f74f-1f28-476b-9308-e6a93af909eb","Type":"ContainerStarted","Data":"ea020f5779252b6d610f0bf02d6a0d7516a4b91efa0bb5988e047497164e2559"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.178509 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-lk6fw" event={"ID":"5da6b304-e28f-4666-817f-06bcc107e3fe","Type":"ContainerStarted","Data":"5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.178555 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lk6fw" event={"ID":"5da6b304-e28f-4666-817f-06bcc107e3fe","Type":"ContainerStarted","Data":"5f6b719203ce8a78e6a6372996d2971a65904c04bee0d4d1c5e232c01af007c9"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.180076 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.180109 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.180118 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a97150f72a7c8e6691d9e97f24296840959f1b80eb45932e58b0a3a87d8ef36a"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.181406 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.181468 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d0ac290e71c643b416afc7650a37ad8de504dcf4054bd0e3a25b0a6a1a5f9959"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.182526 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.184711 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.185479 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.188805 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9" exitCode=255 Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.188838 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9"} Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.188867 4805 scope.go:117] "RemoveContainer" containerID="a08a435a3ba53aa05ea24f00882570156fded642b2a0a1f5ddc0de3f968b2426" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.195611 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers 
with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.196058 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.196294 4805 scope.go:117] "RemoveContainer" containerID="99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9" Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.196577 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.207098 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.219988 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.230490 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.240597 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.251126 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.261424 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.270956 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.272182 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.282449 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for 
pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: W0217 00:23:16.301069 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d9024ef_7937_42b2_8fbc_60db984b9a2f.slice/crio-c09c210ac5d0e53e9f60e90bbffe5ae8b13f9b2dd1a44fe3519e6a52c3902fda WatchSource:0}: Error finding container c09c210ac5d0e53e9f60e90bbffe5ae8b13f9b2dd1a44fe3519e6a52c3902fda: Status 404 returned error can't find the container with id c09c210ac5d0e53e9f60e90bbffe5ae8b13f9b2dd1a44fe3519e6a52c3902fda Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.303749 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.315566 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.338280 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a08a435a3ba53aa05ea24f00882570156fded642b2a0a1f5ddc0de3f968b2426\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:22:58Z\\\",\\\"message\\\":\\\"W0217 00:22:57.923291 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 00:22:57.923581 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771287777 cert, and key in /tmp/serving-cert-4207032538/serving-signer.crt, /tmp/serving-cert-4207032538/serving-signer.key\\\\nI0217 00:22:58.201245 1 observer_polling.go:159] Starting file observer\\\\nW0217 00:22:58.205546 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 00:22:58.205795 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:22:58.207603 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4207032538/tls.crt::/tmp/serving-cert-4207032538/tls.key\\\\\\\"\\\\nF0217 00:22:58.420450 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.375239 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://a08a435a3ba53aa05ea24f00882570156fded642b2a0a1f5ddc0de3f968b2426\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:22:58Z\\\",\\\"message\\\":\\\"W0217 00:22:57.923291 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 00:22:57.923581 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771287777 cert, and key in /tmp/serving-cert-4207032538/serving-signer.crt, /tmp/serving-cert-4207032538/serving-signer.key\\\\nI0217 00:22:58.201245 1 observer_polling.go:159] Starting file observer\\\\nW0217 00:22:58.205546 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 00:22:58.205795 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:22:58.207603 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4207032538/tls.crt::/tmp/serving-cert-4207032538/tls.key\\\\\\\"\\\\nF0217 00:22:58.420450 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.417693 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.455695 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.498644 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:16Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.544767 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:16Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.587638 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:16Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.619418 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:16Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.659525 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:16Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.698857 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:16Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.729644 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:37:07.738866803 +0000 UTC Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.746702 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:16Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.778005 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:16Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.784418 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.784482 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.784601 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.784694 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.784722 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.784818 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.790955 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.791619 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.792286 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.792901 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.793510 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.793987 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.794598 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.795104 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.795689 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.796237 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.798215 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.798867 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.799720 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.800211 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.801076 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.801673 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.802589 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.803029 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.803571 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.804954 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.805893 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.806651 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.807198 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.808090 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.809651 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.810423 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.811719 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.812306 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.813613 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.814241 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.814828 4805 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.815393 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.817427 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.818046 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.819117 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.819245 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:16Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.820793 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.821495 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.822398 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.823005 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.823978 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.824497 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.825485 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.826141 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.827050 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.827526 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.828408 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.828882 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.830020 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.830534 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.831340 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.831793 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.832343 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.833243 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.833701 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.863494 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:16Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.896095 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.896189 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.896213 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.896239 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896340 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:23:18.896294452 +0000 UTC m=+24.912103850 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:23:16 crc kubenswrapper[4805]: I0217 00:23:16.896432 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896815 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896823 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896846 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896858 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896868 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:18.896853328 +0000 UTC m=+24.912662726 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896903 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:18.896890569 +0000 UTC m=+24.912700057 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896904 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896941 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:18.896932851 +0000 UTC m=+24.912742369 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896949 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896960 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896970 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:16 crc kubenswrapper[4805]: E0217 00:23:16.896994 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:18.896987692 +0000 UTC m=+24.912797090 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.193112 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd" exitCode=0 Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.193221 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd"} Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.193257 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"c09c210ac5d0e53e9f60e90bbffe5ae8b13f9b2dd1a44fe3519e6a52c3902fda"} Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.195667 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.199242 4805 scope.go:117] "RemoveContainer" containerID="99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9" Feb 17 00:23:17 crc kubenswrapper[4805]: E0217 00:23:17.199499 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.200411 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516"} Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.200462 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287"} Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.201901 4805 generic.go:334] "Generic (PLEG): container finished" podID="d03ce26a-37aa-4bc4-8057-f1f9c158868b" containerID="fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0" exitCode=0 Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.201946 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" event={"ID":"d03ce26a-37aa-4bc4-8057-f1f9c158868b","Type":"ContainerDied","Data":"fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0"} Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.222637 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a08a435a3ba53aa05ea24f00882570156fded642b2a0a1f5ddc0de3f968b2426\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:22:58Z\\\",\\\"message\\\":\\\"W0217 00:22:57.923291 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0217 
00:22:57.923581 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771287777 cert, and key in /tmp/serving-cert-4207032538/serving-signer.crt, /tmp/serving-cert-4207032538/serving-signer.key\\\\nI0217 00:22:58.201245 1 observer_polling.go:159] Starting file observer\\\\nW0217 00:22:58.205546 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 00:22:58.205795 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:22:58.207603 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4207032538/tls.crt::/tmp/serving-cert-4207032538/tls.key\\\\\\\"\\\\nF0217 00:22:58.420450 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.243272 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.267533 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.290673 4805 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.305805 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.316962 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.330989 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.344180 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.358591 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.372650 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.390584 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.407377 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.416639 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.426190 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.457917 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.498718 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.544276 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.575464 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.624218 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkub
e-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574
53265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.658129 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.698872 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.729861 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:52:21.43672303 +0000 UTC Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.748447 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.782820 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c
915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.849221 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.862230 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.900594 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:17Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:17 crc kubenswrapper[4805]: I0217 00:23:17.950994 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.206876 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18"} Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.208926 4805 generic.go:334] "Generic (PLEG): container finished" podID="d03ce26a-37aa-4bc4-8057-f1f9c158868b" containerID="88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812" exitCode=0 Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.208961 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" event={"ID":"d03ce26a-37aa-4bc4-8057-f1f9c158868b","Type":"ContainerDied","Data":"88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812"} Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.215308 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12"} Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.215371 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7"} Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.215386 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6"} Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.215400 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9"} Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.215412 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01"} Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.215424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3"} Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.215964 4805 scope.go:117] "RemoveContainer" containerID="99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9" Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.216113 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.222652 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 
00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.236319 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.256989 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.267566 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.288028 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.302437 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc 
kubenswrapper[4805]: I0217 00:23:18.315816 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.326885 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.338173 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.350625 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.364768 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.379215 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.420824 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.462368 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.499762 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.541192 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.576171 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.618316 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.658446 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.701913 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z 
is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.730205 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:16:21.632699441 +0000 UTC Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.739721 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.757262 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.761721 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.781818 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.784044 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.784153 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.784155 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.784291 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.784419 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.784537 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.799192 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.842614 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.879839 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.915947 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.916044 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.916079 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.916141 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916187 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:23:22.91614724 +0000 UTC m=+28.931956668 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916222 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916270 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916298 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916311 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.916266 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916273 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:22.916257223 +0000 UTC m=+28.932066701 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916265 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916382 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:22.916368496 +0000 UTC m=+28.932177954 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916403 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:22.916393147 +0000 UTC m=+28.932202695 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916436 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916458 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916477 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:18 crc kubenswrapper[4805]: E0217 00:23:18.916539 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:22.916525751 +0000 UTC m=+28.932335219 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.921690 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:18 crc kubenswrapper[4805]: I0217 00:23:18.963694 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.001465 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:18Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.040681 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.075880 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.116926 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.157002 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.196743 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.222107 4805 generic.go:334] "Generic (PLEG): container finished" podID="d03ce26a-37aa-4bc4-8057-f1f9c158868b" containerID="28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b" exitCode=0 Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.222775 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" event={"ID":"d03ce26a-37aa-4bc4-8057-f1f9c158868b","Type":"ContainerDied","Data":"28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b"} Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.251799 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: E0217 00:23:19.257696 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.299854 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.342759 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.384848 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.418593 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.489449 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.513509 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.550860 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.577694 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.620585 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.661961 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.699063 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.730801 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:22:40.572949898 +0000 UTC Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.748412 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z 
is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.779870 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.829450 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.860577 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.911271 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.949700 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:19 crc kubenswrapper[4805]: I0217 00:23:19.983111 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.
126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:19Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.023001 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.058922 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.100734 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.229033 4805 generic.go:334] "Generic (PLEG): 
container finished" podID="d03ce26a-37aa-4bc4-8057-f1f9c158868b" containerID="0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4" exitCode=0 Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.229128 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" event={"ID":"d03ce26a-37aa-4bc4-8057-f1f9c158868b","Type":"ContainerDied","Data":"0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4"} Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.269542 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef
0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.292938 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.309486 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.329405 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.351578 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.369482 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.391245 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.430145 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.459586 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.498156 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.548648 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z 
is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.581050 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.612891 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.616591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.616664 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.616688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.616854 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.624468 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.672799 4805 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.673129 4805 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.679902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.680140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.680313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.680380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.680399 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:20Z","lastTransitionTime":"2026-02-17T00:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:20 crc kubenswrapper[4805]: E0217 00:23:20.699113 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.703072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.703116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.703128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.703144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.703154 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:20Z","lastTransitionTime":"2026-02-17T00:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.711807 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: E0217 00:23:20.722357 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.726974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.727003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.727014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.727028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.727040 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:20Z","lastTransitionTime":"2026-02-17T00:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.731414 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:53:13.26270959 +0000 UTC Feb 17 00:23:20 crc kubenswrapper[4805]: E0217 00:23:20.740637 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.745310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.745377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.745388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.745404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.745416 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:20Z","lastTransitionTime":"2026-02-17T00:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:20 crc kubenswrapper[4805]: E0217 00:23:20.760643 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.765293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.765343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.765355 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.765372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.765385 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:20Z","lastTransitionTime":"2026-02-17T00:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:20 crc kubenswrapper[4805]: E0217 00:23:20.777930 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:20Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:20 crc kubenswrapper[4805]: E0217 00:23:20.778093 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.780513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.780553 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.780565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.780587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.780602 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:20Z","lastTransitionTime":"2026-02-17T00:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.784581 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.784635 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:20 crc kubenswrapper[4805]: E0217 00:23:20.784780 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.784912 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:20 crc kubenswrapper[4805]: E0217 00:23:20.785032 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:20 crc kubenswrapper[4805]: E0217 00:23:20.785132 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.883166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.883532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.883677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.883806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.883944 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:20Z","lastTransitionTime":"2026-02-17T00:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.986962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.987082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.987107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.987135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:20 crc kubenswrapper[4805]: I0217 00:23:20.987156 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:20Z","lastTransitionTime":"2026-02-17T00:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.090477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.090546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.090572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.090603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.090620 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:21Z","lastTransitionTime":"2026-02-17T00:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.192730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.192759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.192768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.192780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.192789 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:21Z","lastTransitionTime":"2026-02-17T00:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.237132 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9"} Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.240960 4805 generic.go:334] "Generic (PLEG): container finished" podID="d03ce26a-37aa-4bc4-8057-f1f9c158868b" containerID="cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128" exitCode=0 Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.240988 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" event={"ID":"d03ce26a-37aa-4bc4-8057-f1f9c158868b","Type":"ContainerDied","Data":"cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128"} Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.270639 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.286397 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.294946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.294973 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.294981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.294993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.295001 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:21Z","lastTransitionTime":"2026-02-17T00:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.304969 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.319561 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.336257 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.352281 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.371658 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.393186 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.397534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.397571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.397581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.397601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.397617 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:21Z","lastTransitionTime":"2026-02-17T00:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.410643 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.423903 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.445101 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.476607 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.495486 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.500354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.500497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.500514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.500539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.500551 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:21Z","lastTransitionTime":"2026-02-17T00:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.516992 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f5759
64bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:21Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.531797 4805 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.604920 4805 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.604964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.604979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.605000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.605016 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:21Z","lastTransitionTime":"2026-02-17T00:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.707405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.707461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.707469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.707481 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.707509 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:21Z","lastTransitionTime":"2026-02-17T00:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.732621 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:37:23.311768749 +0000 UTC Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.810414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.810780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.810791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.810833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.810845 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:21Z","lastTransitionTime":"2026-02-17T00:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.913554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.913597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.913659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.913677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:21 crc kubenswrapper[4805]: I0217 00:23:21.913689 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:21Z","lastTransitionTime":"2026-02-17T00:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.016728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.016773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.016790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.016814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.016832 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:22Z","lastTransitionTime":"2026-02-17T00:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.119909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.119994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.120015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.120042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.120060 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:22Z","lastTransitionTime":"2026-02-17T00:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.222949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.223014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.223034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.223059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.223077 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:22Z","lastTransitionTime":"2026-02-17T00:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.250217 4805 generic.go:334] "Generic (PLEG): container finished" podID="d03ce26a-37aa-4bc4-8057-f1f9c158868b" containerID="3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e" exitCode=0 Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.250298 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" event={"ID":"d03ce26a-37aa-4bc4-8057-f1f9c158868b","Type":"ContainerDied","Data":"3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.272544 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.295428 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.313730 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.325618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.325682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.325699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.325723 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.325741 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:22Z","lastTransitionTime":"2026-02-17T00:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.329985 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.343091 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.355807 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.372153 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.394805 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.415807 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.428795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.428886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.428904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.428924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.428963 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:22Z","lastTransitionTime":"2026-02-17T00:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.436317 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.459421 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.479588 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.497667 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.528475 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:22Z 
is after 2025-08-24T17:21:41Z" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.533179 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.533212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.533225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.533244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.533257 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:22Z","lastTransitionTime":"2026-02-17T00:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.636048 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.636107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.636125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.636149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.636165 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:22Z","lastTransitionTime":"2026-02-17T00:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.733426 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 03:45:17.164432395 +0000 UTC Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.739395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.739446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.739457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.739477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.739488 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:22Z","lastTransitionTime":"2026-02-17T00:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.784394 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.784444 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.784509 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.784448 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.784710 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.784914 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.842607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.842652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.842663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.842682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.842697 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:22Z","lastTransitionTime":"2026-02-17T00:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.855360 4805 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.944938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.944996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.945020 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.945050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.945075 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:22Z","lastTransitionTime":"2026-02-17T00:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.972036 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972230 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:23:30.972197101 +0000 UTC m=+36.988006549 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.972300 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.972402 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.972471 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972534 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972704 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972745 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.972533 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972784 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972887 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 
00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972912 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972758 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:30.972684825 +0000 UTC m=+36.988494263 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972941 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972956 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:30.972940272 +0000 UTC m=+36.988749710 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972962 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.972978 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:30.972967913 +0000 UTC m=+36.988777341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:22 crc kubenswrapper[4805]: E0217 00:23:22.973020 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:30.973002534 +0000 UTC m=+36.988811972 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:22 crc kubenswrapper[4805]: I0217 00:23:22.990968 4805 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.048538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.048577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.048590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.048608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.048619 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:23Z","lastTransitionTime":"2026-02-17T00:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.151657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.151715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.151742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.151773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.151796 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:23Z","lastTransitionTime":"2026-02-17T00:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.253721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.254348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.254365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.254390 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.254405 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:23Z","lastTransitionTime":"2026-02-17T00:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.260479 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" event={"ID":"d03ce26a-37aa-4bc4-8057-f1f9c158868b","Type":"ContainerStarted","Data":"91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1"} Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.267984 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1"} Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.268804 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.268869 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.286721 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.304778 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.313777 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.314126 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.327213 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.341533 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.356696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.356758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.356782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.356812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.356834 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:23Z","lastTransitionTime":"2026-02-17T00:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.360403 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.379862 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.396875 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.437426 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkub
e-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574
53265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.458766 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.459847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.460064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.460245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.460526 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.460713 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:23Z","lastTransitionTime":"2026-02-17T00:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.483398 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 
2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.497908 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.511163 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.528939 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.542624 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.563704 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.563822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.564550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.564571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.564598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.564617 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:23Z","lastTransitionTime":"2026-02-17T00:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.589518 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 
2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.603790 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.617041 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.632575 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.646006 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.659610 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.667424 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.667482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.667499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.667526 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.667544 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:23Z","lastTransitionTime":"2026-02-17T00:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.680252 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.696140 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.734343 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:00:17.679619009 +0000 UTC Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.736846 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.755793 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.769509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.769539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.769548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.769560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.769570 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:23Z","lastTransitionTime":"2026-02-17T00:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.777650 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cb643f236e65fc21a6d54dea5dbfbc11feebe0
d240dea3fa14f64180df51a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.794166 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.806061 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:23Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.872291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.872575 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.872646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.872710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.872773 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:23Z","lastTransitionTime":"2026-02-17T00:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.976580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.976616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.976624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.976639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:23 crc kubenswrapper[4805]: I0217 00:23:23.976648 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:23Z","lastTransitionTime":"2026-02-17T00:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.078780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.078831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.078851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.078875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.078892 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:24Z","lastTransitionTime":"2026-02-17T00:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.185480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.185547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.185563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.185586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.185607 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:24Z","lastTransitionTime":"2026-02-17T00:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.272037 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.287773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.287817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.287836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.287857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.287901 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:24Z","lastTransitionTime":"2026-02-17T00:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.390738 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.390801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.390820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.390841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.390872 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:24Z","lastTransitionTime":"2026-02-17T00:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.494018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.494494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.494679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.494821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.494964 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:24Z","lastTransitionTime":"2026-02-17T00:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.597512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.597623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.597646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.597675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.597699 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:24Z","lastTransitionTime":"2026-02-17T00:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.700840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.701118 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.701231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.701418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.701527 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:24Z","lastTransitionTime":"2026-02-17T00:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.735289 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 13:04:42.029501119 +0000 UTC Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.784000 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.784028 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.784179 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:24 crc kubenswrapper[4805]: E0217 00:23:24.784414 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:24 crc kubenswrapper[4805]: E0217 00:23:24.784527 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:24 crc kubenswrapper[4805]: E0217 00:23:24.784658 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.804692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.804748 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.804767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.804792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.804810 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:24Z","lastTransitionTime":"2026-02-17T00:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.805659 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.825520 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.846193 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.868651 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.889298 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.908120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.908164 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.908180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.908201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.908219 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:24Z","lastTransitionTime":"2026-02-17T00:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.910902 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.927072 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.955935 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:24 crc kubenswrapper[4805]: I0217 00:23:24.980505 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.005702 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:25Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.011012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.011064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:25 crc 
kubenswrapper[4805]: I0217 00:23:25.011081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.011104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.011123 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:25Z","lastTransitionTime":"2026-02-17T00:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.027809 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:25Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.058609 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:25Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.083709 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:25Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.097784 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:25Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.113838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.113902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.113918 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.113941 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.113957 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:25Z","lastTransitionTime":"2026-02-17T00:23:25Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.216132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.216177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.216189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.216208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.216221 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:25Z","lastTransitionTime":"2026-02-17T00:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.274818 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.319133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.319177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.319190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.319206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.319218 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:25Z","lastTransitionTime":"2026-02-17T00:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.421535 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.421580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.421592 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.421609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.421621 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:25Z","lastTransitionTime":"2026-02-17T00:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.545951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.546318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.546518 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.546671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.546834 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:25Z","lastTransitionTime":"2026-02-17T00:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.649585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.649642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.649659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.649686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.649705 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:25Z","lastTransitionTime":"2026-02-17T00:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.735709 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:24:52.233989182 +0000 UTC Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.753426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.753479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.753491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.753509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.753521 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:25Z","lastTransitionTime":"2026-02-17T00:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.857158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.857216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.857234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.857257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.857288 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:25Z","lastTransitionTime":"2026-02-17T00:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.960543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.960598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.960615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.960637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:25 crc kubenswrapper[4805]: I0217 00:23:25.960655 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:25Z","lastTransitionTime":"2026-02-17T00:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.064430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.064487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.064505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.064529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.064548 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:26Z","lastTransitionTime":"2026-02-17T00:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.168059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.168128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.168146 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.168170 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.168187 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:26Z","lastTransitionTime":"2026-02-17T00:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.271523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.271574 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.271591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.271616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.271637 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:26Z","lastTransitionTime":"2026-02-17T00:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.280692 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/0.log" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.285818 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1" exitCode=1 Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.285874 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1"} Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.287060 4805 scope.go:117] "RemoveContainer" containerID="54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.310168 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.339576 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.357782 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.372254 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.375249 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.375301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.375320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.375372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.375392 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:26Z","lastTransitionTime":"2026-02-17T00:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.393539 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.407711 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.440604 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:25Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:23:25.747559 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:23:25.747605 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:23:25.747622 6110 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:23:25.747639 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:23:25.747644 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:23:25.747647 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:23:25.747662 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 00:23:25.747674 6110 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:23:25.747689 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:23:25.747694 6110 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:23:25.747714 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 00:23:25.747715 6110 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:23:25.747733 6110 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:23:25.747755 6110 factory.go:656] Stopping watch factory\\\\nI0217 00:23:25.747770 6110 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.458967 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.479146 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.479452 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.479611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:26 crc 
kubenswrapper[4805]: I0217 00:23:26.479764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.479905 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:26Z","lastTransitionTime":"2026-02-17T00:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.483154 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:1
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.501008 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b55
63aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.519081 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.537508 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.556290 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.574598 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:26Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.582530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.582576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.582591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.582610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.582626 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:26Z","lastTransitionTime":"2026-02-17T00:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.685261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.685311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.685363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.685395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.685418 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:26Z","lastTransitionTime":"2026-02-17T00:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.736161 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:42:22.832231506 +0000 UTC Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.784404 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.784467 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.784467 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:26 crc kubenswrapper[4805]: E0217 00:23:26.784639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:26 crc kubenswrapper[4805]: E0217 00:23:26.785088 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:26 crc kubenswrapper[4805]: E0217 00:23:26.785256 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.794022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.794084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.794106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.794134 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.794157 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:26Z","lastTransitionTime":"2026-02-17T00:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.896980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.897041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.897061 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.897084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.897101 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:26Z","lastTransitionTime":"2026-02-17T00:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.999618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:26 crc kubenswrapper[4805]: I0217 00:23:26.999900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:26.999997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.000082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.000168 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:27Z","lastTransitionTime":"2026-02-17T00:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.051841 4805 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.102954 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.103234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.103358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.103450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.103543 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:27Z","lastTransitionTime":"2026-02-17T00:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.209245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.209285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.209297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.209315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.209345 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:27Z","lastTransitionTime":"2026-02-17T00:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.297463 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/0.log" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.301898 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506"} Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.302123 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.312469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.312674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.312794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.312924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.313049 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:27Z","lastTransitionTime":"2026-02-17T00:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.319484 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.334756 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.346237 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.378913 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:25Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:23:25.747559 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:23:25.747605 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:23:25.747622 6110 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:23:25.747639 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:23:25.747644 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:23:25.747647 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:23:25.747662 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 00:23:25.747674 6110 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:23:25.747689 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:23:25.747694 6110 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:23:25.747714 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 00:23:25.747715 6110 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:23:25.747733 6110 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:23:25.747755 6110 factory.go:656] Stopping watch factory\\\\nI0217 00:23:25.747770 6110 ovnkube.go:599] Stopped 
ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.399978 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.415667 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.415894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.415959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.415972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.415990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.416004 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:27Z","lastTransitionTime":"2026-02-17T00:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.438512 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.464377 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.481309 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.497041 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.514292 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.518717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.518788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.518807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.518833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.518854 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:27Z","lastTransitionTime":"2026-02-17T00:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.530465 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.549185 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.569199 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:27Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.621678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.621715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.621724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.621740 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.621750 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:27Z","lastTransitionTime":"2026-02-17T00:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.724298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.724570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.724683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.724793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.724902 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:27Z","lastTransitionTime":"2026-02-17T00:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.736520 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:28:04.182536783 +0000 UTC Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.827820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.827864 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.827880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.827900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.827916 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:27Z","lastTransitionTime":"2026-02-17T00:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.930820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.930857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.930873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.930895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:27 crc kubenswrapper[4805]: I0217 00:23:27.930912 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:27Z","lastTransitionTime":"2026-02-17T00:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.033398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.033447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.033465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.033485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.033499 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:28Z","lastTransitionTime":"2026-02-17T00:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.136231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.136705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.136833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.136964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.137110 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:28Z","lastTransitionTime":"2026-02-17T00:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.239942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.240195 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.240279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.240396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.240503 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:28Z","lastTransitionTime":"2026-02-17T00:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.308532 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/1.log" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.309660 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/0.log" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.313584 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506" exitCode=1 Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.313689 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506"} Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.313842 4805 scope.go:117] "RemoveContainer" containerID="54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.315032 4805 scope.go:117] "RemoveContainer" containerID="c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506" Feb 17 00:23:28 crc kubenswrapper[4805]: E0217 00:23:28.315474 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.335700 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.338271 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt"] Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.339217 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.346619 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.349234 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.350266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.350319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.350371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.350403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.350434 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:28Z","lastTransitionTime":"2026-02-17T00:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.365090 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.383356 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 
2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.398465 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.426169 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:25Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:23:25.747559 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:23:25.747605 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:23:25.747622 6110 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:23:25.747639 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:23:25.747644 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:23:25.747647 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:23:25.747662 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 00:23:25.747674 6110 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:23:25.747689 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:23:25.747694 6110 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:23:25.747714 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 00:23:25.747715 6110 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:23:25.747733 6110 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:23:25.747755 6110 factory.go:656] Stopping watch factory\\\\nI0217 00:23:25.747770 6110 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:27Z\\\",\\\"message\\\":\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 00:23:27.378602 6245 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 00:23:27.378691 6245 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 00:23:27.378751 6245 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"container
ID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.431715 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745dv\" (UniqueName: \"kubernetes.io/projected/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-kube-api-access-745dv\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.431836 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.431882 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.432464 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.446160 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.453681 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.453726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.453737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.453752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.453762 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:28Z","lastTransitionTime":"2026-02-17T00:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.463644 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.483440 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.502518 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.528855 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.533195 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.533250 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.533319 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.533421 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-745dv\" (UniqueName: \"kubernetes.io/projected/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-kube-api-access-745dv\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.534908 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.535634 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.546182 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.550863 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.554295 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-745dv\" (UniqueName: \"kubernetes.io/projected/57d20f37-b784-4cc1-8f0d-fbfbe640f0e3-kube-api-access-745dv\") pod \"ovnkube-control-plane-749d76644c-jlmnt\" (UID: \"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.556453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:28 crc kubenswrapper[4805]: 
I0217 00:23:28.556555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.556615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.556678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.556742 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:28Z","lastTransitionTime":"2026-02-17T00:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.560964 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.573437 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.586026 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.601478 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.620651 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.638610 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.652591 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.659242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.659283 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.659291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.659306 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.659315 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:28Z","lastTransitionTime":"2026-02-17T00:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.672141 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.674384 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.687820 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.710095 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:25Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:23:25.747559 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:23:25.747605 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:23:25.747622 6110 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:23:25.747639 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:23:25.747644 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:23:25.747647 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:23:25.747662 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 00:23:25.747674 6110 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:23:25.747689 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:23:25.747694 6110 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:23:25.747714 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 00:23:25.747715 6110 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:23:25.747733 6110 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:23:25.747755 6110 factory.go:656] Stopping watch factory\\\\nI0217 00:23:25.747770 6110 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:27Z\\\",\\\"message\\\":\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 00:23:27.378602 6245 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 00:23:27.378691 6245 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 00:23:27.378751 6245 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"container
ID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.726366 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.736700 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:23:36.117215069 +0000 UTC Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.744906 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.760476 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.762511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.762555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:28 crc 
kubenswrapper[4805]: I0217 00:23:28.762569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.762589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.762601 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:28Z","lastTransitionTime":"2026-02-17T00:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.772916 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.784545 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.784513 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.784545 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.784565 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:28 crc kubenswrapper[4805]: E0217 00:23:28.784697 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:28 crc kubenswrapper[4805]: E0217 00:23:28.785623 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:28 crc kubenswrapper[4805]: E0217 00:23:28.785766 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.800256 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e9116
99a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.810865 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.819761 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:28Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.864703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.864773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.864790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.864815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.864832 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:28Z","lastTransitionTime":"2026-02-17T00:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.966462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.966619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.966681 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.966762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:28 crc kubenswrapper[4805]: I0217 00:23:28.966834 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:28Z","lastTransitionTime":"2026-02-17T00:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.070309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.070371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.070384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.070440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.070454 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:29Z","lastTransitionTime":"2026-02-17T00:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.173859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.173915 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.174115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.174134 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.174150 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:29Z","lastTransitionTime":"2026-02-17T00:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.276026 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.276285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.276380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.276465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.276534 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:29Z","lastTransitionTime":"2026-02-17T00:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.326282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" event={"ID":"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3","Type":"ContainerStarted","Data":"cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.326343 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" event={"ID":"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3","Type":"ContainerStarted","Data":"5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.326354 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" event={"ID":"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3","Type":"ContainerStarted","Data":"6e33e651d308ca90c90fd38a1bb824d6dadfab247c96736cf6f2c8b9be9c0ed6"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.328442 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/1.log" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.343653 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.357726 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.372136 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.379199 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.379315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.379414 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.379501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.379571 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:29Z","lastTransitionTime":"2026-02-17T00:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.385358 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.411270 4805 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:25Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:23:25.747559 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:23:25.747605 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:23:25.747622 6110 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:23:25.747639 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:23:25.747644 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:23:25.747647 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:23:25.747662 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 00:23:25.747674 6110 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:23:25.747689 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:23:25.747694 6110 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:23:25.747714 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 00:23:25.747715 6110 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:23:25.747733 6110 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:23:25.747755 6110 factory.go:656] Stopping watch factory\\\\nI0217 00:23:25.747770 6110 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:27Z\\\",\\\"message\\\":\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 00:23:27.378602 6245 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 00:23:27.378691 6245 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 00:23:27.378751 6245 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgw
w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.432283 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.465624 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.482459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.482520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.482537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.482561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.482579 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:29Z","lastTransitionTime":"2026-02-17T00:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.483540 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 
2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.497797 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.511193 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.524613 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.535502 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.549551 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 
00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.569234 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.584302 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.585077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.585143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.585167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.585200 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.585227 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:29Z","lastTransitionTime":"2026-02-17T00:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.688721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.688776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.688795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.688818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.688835 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:29Z","lastTransitionTime":"2026-02-17T00:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.737364 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 12:41:36.103173046 +0000 UTC Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.792879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.792935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.792951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.792999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.793021 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:29Z","lastTransitionTime":"2026-02-17T00:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.839915 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jnv59"] Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.840742 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:29 crc kubenswrapper[4805]: E0217 00:23:29.840859 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.863295 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.880977 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.894905 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.896153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.896188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.896197 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.896213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.896389 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:29Z","lastTransitionTime":"2026-02-17T00:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.905420 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.931989 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:25Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:23:25.747559 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:23:25.747605 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:23:25.747622 6110 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:23:25.747639 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:23:25.747644 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:23:25.747647 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:23:25.747662 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 00:23:25.747674 6110 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:23:25.747689 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:23:25.747694 6110 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:23:25.747714 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 00:23:25.747715 6110 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:23:25.747733 6110 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:23:25.747755 6110 factory.go:656] Stopping watch factory\\\\nI0217 00:23:25.747770 6110 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:27Z\\\",\\\"message\\\":\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 00:23:27.378602 6245 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 00:23:27.378691 6245 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 00:23:27.378751 6245 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"container
ID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.949696 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cccv\" (UniqueName: \"kubernetes.io/projected/86b8a270-8cb3-4266-9fe0-3cfd027a9174-kube-api-access-6cccv\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.949747 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.950113 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.971357 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.989529 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:29Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.999218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.999251 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.999261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.999278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:29 crc kubenswrapper[4805]: I0217 00:23:29.999289 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:29Z","lastTransitionTime":"2026-02-17T00:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.003093 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:30Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.014704 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:30Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.025228 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:30Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.037602 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:30Z is after 2025-08-24T17:21:41Z" Feb 17 
00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.050487 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:30Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.050689 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cccv\" (UniqueName: \"kubernetes.io/projected/86b8a270-8cb3-4266-9fe0-3cfd027a9174-kube-api-access-6cccv\") pod 
\"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.050803 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:30 crc kubenswrapper[4805]: E0217 00:23:30.050958 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:23:30 crc kubenswrapper[4805]: E0217 00:23:30.051023 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs podName:86b8a270-8cb3-4266-9fe0-3cfd027a9174 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:30.551007343 +0000 UTC m=+36.566816751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs") pod "network-metrics-daemon-jnv59" (UID: "86b8a270-8cb3-4266-9fe0-3cfd027a9174") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.067451 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:30Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.068521 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cccv\" (UniqueName: \"kubernetes.io/projected/86b8a270-8cb3-4266-9fe0-3cfd027a9174-kube-api-access-6cccv\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.081632 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:30Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.098178 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:30Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.102626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.102675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.102700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.102729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.102748 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:30Z","lastTransitionTime":"2026-02-17T00:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.205999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.206290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.206307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.206360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.206378 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:30Z","lastTransitionTime":"2026-02-17T00:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.309475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.309524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.309544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.309572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.309595 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:30Z","lastTransitionTime":"2026-02-17T00:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.411936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.412000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.412022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.412050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.412070 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:30Z","lastTransitionTime":"2026-02-17T00:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.515319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.515422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.515439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.515462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.515480 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:30Z","lastTransitionTime":"2026-02-17T00:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.556209 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:30 crc kubenswrapper[4805]: E0217 00:23:30.556438 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:23:30 crc kubenswrapper[4805]: E0217 00:23:30.556519 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs podName:86b8a270-8cb3-4266-9fe0-3cfd027a9174 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:31.556497614 +0000 UTC m=+37.572307052 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs") pod "network-metrics-daemon-jnv59" (UID: "86b8a270-8cb3-4266-9fe0-3cfd027a9174") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.619515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.619591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.619608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.619635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.619653 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:30Z","lastTransitionTime":"2026-02-17T00:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.722619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.722672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.722692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.722714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.722731 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:30Z","lastTransitionTime":"2026-02-17T00:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.738047 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:15:11.131064304 +0000 UTC Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.783681 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.783717 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.783701 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:30 crc kubenswrapper[4805]: E0217 00:23:30.783859 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:30 crc kubenswrapper[4805]: E0217 00:23:30.783973 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:30 crc kubenswrapper[4805]: E0217 00:23:30.784083 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.825628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.825689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.825705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.825729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.825746 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:30Z","lastTransitionTime":"2026-02-17T00:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.929033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.929094 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.929112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.929135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.929152 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:30Z","lastTransitionTime":"2026-02-17T00:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.998582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.998639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.998656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.998679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:30 crc kubenswrapper[4805]: I0217 00:23:30.998696 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:30Z","lastTransitionTime":"2026-02-17T00:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.019736 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:31Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.025120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.025179 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.025202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.025229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.025249 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.047077 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:31Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.052014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.052079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.052104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.052130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.052150 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.062863 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.063019 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063048 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:23:47.063010184 +0000 UTC m=+53.078819632 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.063100 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063134 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.063190 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063206 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:47.063184429 +0000 UTC m=+53.078993857 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.063256 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063420 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063441 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063449 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063476 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063484 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063496 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063506 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063499 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:47.063475007 +0000 UTC m=+53.079284445 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063578 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:47.06355628 +0000 UTC m=+53.079365718 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.063600 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:47.063588891 +0000 UTC m=+53.079398319 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.072634 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-17T00:23:31Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.078696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.078759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.078782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.078809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.078831 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.100372 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:31Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.105619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.105683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.105702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.105727 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.105744 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.128043 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:31Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.128199 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.130541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.130603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.130621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.130646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.130668 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.233616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.233694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.233717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.233748 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.233772 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.337069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.337141 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.337157 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.337181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.337254 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.440618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.440685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.440701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.440726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.440744 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.543603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.543678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.543704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.543732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.543752 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.570581 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.570800 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.570942 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs podName:86b8a270-8cb3-4266-9fe0-3cfd027a9174 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:33.570912796 +0000 UTC m=+39.586722234 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs") pod "network-metrics-daemon-jnv59" (UID: "86b8a270-8cb3-4266-9fe0-3cfd027a9174") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.646784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.646855 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.646877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.646907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.646934 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.739035 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:43:29.235723912 +0000 UTC Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.749068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.749135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.749153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.749177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.749195 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.783699 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:31 crc kubenswrapper[4805]: E0217 00:23:31.783884 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.784505 4805 scope.go:117] "RemoveContainer" containerID="99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.852471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.852537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.852556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.852582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.852600 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.955362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.955417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.955429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.955448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:31 crc kubenswrapper[4805]: I0217 00:23:31.955459 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:31Z","lastTransitionTime":"2026-02-17T00:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.057645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.057716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.057729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.057747 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.057759 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:32Z","lastTransitionTime":"2026-02-17T00:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.160559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.160601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.160616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.160634 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.160645 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:32Z","lastTransitionTime":"2026-02-17T00:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.262990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.263036 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.263053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.263075 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.263091 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:32Z","lastTransitionTime":"2026-02-17T00:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.344795 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.346644 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c"} Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.348135 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.366362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.366427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.366450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.366480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.366502 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:32Z","lastTransitionTime":"2026-02-17T00:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.368048 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.385552 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.399745 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.412620 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.429666 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 
00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.444681 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.458912 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.469615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.469677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.469694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.469718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.469735 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:32Z","lastTransitionTime":"2026-02-17T00:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.479090 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.497169 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.514418 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.529414 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.546010 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.559859 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.571533 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.571576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.571587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.571605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.571619 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:32Z","lastTransitionTime":"2026-02-17T00:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.585467 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713
cf6dc1ddea57b0c0d3866506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:25Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:23:25.747559 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:23:25.747605 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:23:25.747622 6110 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:23:25.747639 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:23:25.747644 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:23:25.747647 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:23:25.747662 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 00:23:25.747674 6110 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:23:25.747689 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:23:25.747694 6110 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:23:25.747714 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 00:23:25.747715 6110 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:23:25.747733 6110 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:23:25.747755 6110 factory.go:656] Stopping watch factory\\\\nI0217 00:23:25.747770 6110 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:27Z\\\",\\\"message\\\":\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 00:23:27.378602 6245 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 00:23:27.378691 6245 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 00:23:27.378751 6245 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.601617 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.619982 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.673964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.674014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:32 crc 
kubenswrapper[4805]: I0217 00:23:32.674032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.674053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.674068 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:32Z","lastTransitionTime":"2026-02-17T00:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.739871 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:14:30.18587986 +0000 UTC Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.776246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.776456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.776519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.776580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.776644 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:32Z","lastTransitionTime":"2026-02-17T00:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.784663 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.784798 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.784670 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:32 crc kubenswrapper[4805]: E0217 00:23:32.784979 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:32 crc kubenswrapper[4805]: E0217 00:23:32.784807 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:32 crc kubenswrapper[4805]: E0217 00:23:32.785353 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.882560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.882636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.882653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.882682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.882696 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:32Z","lastTransitionTime":"2026-02-17T00:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.985941 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.985997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.986016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.986041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:32 crc kubenswrapper[4805]: I0217 00:23:32.986059 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:32Z","lastTransitionTime":"2026-02-17T00:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.088856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.088907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.088924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.088947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.088963 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:33Z","lastTransitionTime":"2026-02-17T00:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.194386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.194447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.194465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.194492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.194510 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:33Z","lastTransitionTime":"2026-02-17T00:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.297605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.297901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.298024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.298153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.298272 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:33Z","lastTransitionTime":"2026-02-17T00:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.401609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.401676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.401699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.401728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.401747 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:33Z","lastTransitionTime":"2026-02-17T00:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.504082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.504124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.504136 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.504153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.504164 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:33Z","lastTransitionTime":"2026-02-17T00:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.595883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:33 crc kubenswrapper[4805]: E0217 00:23:33.596167 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:23:33 crc kubenswrapper[4805]: E0217 00:23:33.596627 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs podName:86b8a270-8cb3-4266-9fe0-3cfd027a9174 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:37.596593399 +0000 UTC m=+43.612402837 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs") pod "network-metrics-daemon-jnv59" (UID: "86b8a270-8cb3-4266-9fe0-3cfd027a9174") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.607191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.607285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.607316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.607374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.607393 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:33Z","lastTransitionTime":"2026-02-17T00:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.710285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.710349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.710365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.710387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.710401 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:33Z","lastTransitionTime":"2026-02-17T00:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.740072 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:48:44.904027712 +0000 UTC Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.784543 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:33 crc kubenswrapper[4805]: E0217 00:23:33.784775 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.813497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.813555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.813579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.813605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.813629 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:33Z","lastTransitionTime":"2026-02-17T00:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.916245 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.916486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.916564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.916674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:33 crc kubenswrapper[4805]: I0217 00:23:33.916751 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:33Z","lastTransitionTime":"2026-02-17T00:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.020609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.020684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.020711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.020742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.020769 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:34Z","lastTransitionTime":"2026-02-17T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.123955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.124016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.124034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.124058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.124077 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:34Z","lastTransitionTime":"2026-02-17T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.227416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.227470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.227528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.227559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.227581 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:34Z","lastTransitionTime":"2026-02-17T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.330484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.330543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.330568 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.330597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.330620 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:34Z","lastTransitionTime":"2026-02-17T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.433529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.433580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.433598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.433624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.433642 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:34Z","lastTransitionTime":"2026-02-17T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.536875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.536932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.536949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.536972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.536989 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:34Z","lastTransitionTime":"2026-02-17T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.639396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.639456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.639473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.639495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.639512 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:34Z","lastTransitionTime":"2026-02-17T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.740863 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:22:53.332898394 +0000 UTC Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.743507 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.743581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.743678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.743766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.743793 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:34Z","lastTransitionTime":"2026-02-17T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.784397 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.784462 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.784885 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:34 crc kubenswrapper[4805]: E0217 00:23:34.784884 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:34 crc kubenswrapper[4805]: E0217 00:23:34.784998 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:34 crc kubenswrapper[4805]: E0217 00:23:34.785202 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.798903 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:34Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.822790 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:34Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.841074 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:34Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.847823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.847986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.848067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.848146 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.848220 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:34Z","lastTransitionTime":"2026-02-17T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.861084 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:34Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.877722 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:34Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.892205 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:34Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.923189 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54cb643f236e65fc21a6d54dea5dbfbc11feebe0d240dea3fa14f64180df51a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:25Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:23:25.747559 6110 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:23:25.747605 6110 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:23:25.747622 6110 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:23:25.747639 6110 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:23:25.747644 6110 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:23:25.747647 6110 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:23:25.747662 6110 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 00:23:25.747674 6110 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:23:25.747689 6110 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:23:25.747694 6110 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:23:25.747714 6110 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 00:23:25.747715 6110 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:23:25.747733 6110 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:23:25.747755 6110 factory.go:656] Stopping watch factory\\\\nI0217 00:23:25.747770 6110 ovnkube.go:599] Stopped ovnkube\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:27Z\\\",\\\"message\\\":\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 00:23:27.378602 6245 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 00:23:27.378691 6245 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 00:23:27.378751 6245 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgw
w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:34Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.942954 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:34Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.951674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.951716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.951733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.951756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.951775 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:34Z","lastTransitionTime":"2026-02-17T00:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.967417 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:34Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:34 crc kubenswrapper[4805]: I0217 00:23:34.989884 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:34Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.009377 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:35Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.026677 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:35Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.042134 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:35Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.055142 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.055382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.055527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.055688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.055849 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:35Z","lastTransitionTime":"2026-02-17T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.061732 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}
}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:35Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.077835 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T00:23:35Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.097381 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:35Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.159367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.159437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.159454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.159478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.159497 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:35Z","lastTransitionTime":"2026-02-17T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
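reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}

The three "Failed to update status for pod" entries above (ovnkube-control-plane-749d76644c-jlmnt, network-metrics-daemon-jnv59, network-check-target-xd92c) all fail identically: the status PATCH is rejected because the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a TLS certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-17. A minimal sketch for confirming the certificate dates from the node, assuming Python with the third-party cryptography package; host, port and dates come from the log, everything else is illustrative:

    # Sketch only: fetch the webhook's serving certificate and compare its
    # validity window with the local clock. ssl.get_server_certificate()
    # performs no verification, so it works even for an expired certificate.
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # assumed available: pip install cryptography

    pem = ssl.get_server_certificate(("127.0.0.1", 9743))  # endpoint from the log
    cert = x509.load_pem_x509_certificate(pem.encode())

    # not_valid_after_utc needs cryptography >= 42; older releases expose
    # the naive-datetime property not_valid_after instead.
    expires = cert.not_valid_after_utc
    now = datetime.now(timezone.utc)
    print("notAfter:", expires)
    print("expired: ", now > expires)  # True per the log (now is 2026-02-17)

Until that certificate is reissued, every kubelet status update for these pods will keep failing with the same Internal error.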
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.262003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.262082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.262101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.262647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.262714 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:35Z","lastTransitionTime":"2026-02-17T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.365717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.366049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.366277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.366794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.367013 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:35Z","lastTransitionTime":"2026-02-17T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.470967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.471070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.471091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.471117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.471135 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:35Z","lastTransitionTime":"2026-02-17T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.573530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.573598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.573621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.573691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.573788 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:35Z","lastTransitionTime":"2026-02-17T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.677196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.677259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.677280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.677386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.677407 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:35Z","lastTransitionTime":"2026-02-17T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
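Has your network provider started?"}

Every NodeNotReady cycle above carries the same KubeletNotReady message: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration. A minimal sketch of that same check, assuming stdlib Python run on the node; only the directory path is taken from the log:

    # Sketch only: reproduce the condition the kubelet keeps reporting.
    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"  # path named in the log

    def cni_configs(conf_dir: str = CNI_CONF_DIR) -> list[str]:
        # CNI config loaders look for .conf, .conflist and .json files.
        try:
            return sorted(
                name for name in os.listdir(conf_dir)
                if name.endswith((".conf", ".conflist", ".json"))
            )
        except FileNotFoundError:
            return []

    print(cni_configs() or "no CNI configuration file found")

An empty listing is exactly the state the network provider (here ovn-kubernetes) is expected to resolve by writing its config file; the kubelet keeps the node NotReady until one appears.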
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.741882 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:07:20.684772517 +0000 UTC
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.780901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.780946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.780962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.780987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.781004 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:35Z","lastTransitionTime":"2026-02-17T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.784243 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59"
Feb 17 00:23:35 crc kubenswrapper[4805]: E0217 00:23:35.784479 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.884210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.884281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.884298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.884356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.884376 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:35Z","lastTransitionTime":"2026-02-17T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.988258 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.988355 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.988374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.988397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:35 crc kubenswrapper[4805]: I0217 00:23:35.988414 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:35Z","lastTransitionTime":"2026-02-17T00:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.091321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.091404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.091420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.091441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.091457 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:36Z","lastTransitionTime":"2026-02-17T00:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.194259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.194313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.194424 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.194483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.194499 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:36Z","lastTransitionTime":"2026-02-17T00:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.297685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.297733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.297750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.297772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.297788 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:36Z","lastTransitionTime":"2026-02-17T00:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.401041 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.401088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.401104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.401127 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.401143 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:36Z","lastTransitionTime":"2026-02-17T00:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.504397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.504456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.504474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.504496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.504512 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:36Z","lastTransitionTime":"2026-02-17T00:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.607878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.607923 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.607940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.607962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.607981 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:36Z","lastTransitionTime":"2026-02-17T00:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.710940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.711007 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.711030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.711059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.711081 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:36Z","lastTransitionTime":"2026-02-17T00:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.742443 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:44:33.176075 +0000 UTC Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.784218 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.784294 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.784424 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:36 crc kubenswrapper[4805]: E0217 00:23:36.784446 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:36 crc kubenswrapper[4805]: E0217 00:23:36.784515 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:36 crc kubenswrapper[4805]: E0217 00:23:36.784566 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.814654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.814736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.814773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.814802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.814823 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:36Z","lastTransitionTime":"2026-02-17T00:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.917899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.917967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.917985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.918011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:36 crc kubenswrapper[4805]: I0217 00:23:36.918043 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:36Z","lastTransitionTime":"2026-02-17T00:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.021382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.021457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.021475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.021497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.021515 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:37Z","lastTransitionTime":"2026-02-17T00:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.124973 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.125032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.125050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.125074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.125095 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:37Z","lastTransitionTime":"2026-02-17T00:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.227150 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.227198 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.227217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.227240 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.227258 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:37Z","lastTransitionTime":"2026-02-17T00:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.329805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.329861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.329872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.329893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.329906 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:37Z","lastTransitionTime":"2026-02-17T00:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.431951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.431985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.431999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.432020 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.432031 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:37Z","lastTransitionTime":"2026-02-17T00:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.535442 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.535505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.535525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.535549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.535565 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:37Z","lastTransitionTime":"2026-02-17T00:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.638445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.638502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.638520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.638544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.638563 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:37Z","lastTransitionTime":"2026-02-17T00:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.639977 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:37 crc kubenswrapper[4805]: E0217 00:23:37.640178 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:23:37 crc kubenswrapper[4805]: E0217 00:23:37.640256 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs podName:86b8a270-8cb3-4266-9fe0-3cfd027a9174 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:45.640233217 +0000 UTC m=+51.656042645 (durationBeforeRetry 8s). 
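Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs") pod "network-metrics-daemon-jnv59" (UID: "86b8a270-8cb3-4266-9fe0-3cfd027a9174") : object "openshift-multus"/"metrics-daemon-secret" not registered

The reconciler entries above show why network-metrics-daemon-jnv59 cannot start even aside from the CNI gap: the metrics-certs mount needs the secret openshift-multus/metrics-daemon-secret, which is not registered in the kubelet's object cache, and nestedpendingoperations backs the retry off to 8 s (durationBeforeRetry 8s). A quick existence check for that secret, sketched under the assumption that the kubernetes Python client and a kubeconfig able to read the namespace are available; the secret name and namespace come from the error above:

    # Sketch only: confirm whether the secret the kubelet is waiting on exists.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()

    try:
        s = v1.read_namespaced_secret("metrics-daemon-secret", "openshift-multus")
        print("secret exists, keys:", sorted((s.data or {}).keys()))
    except ApiException as e:
        # A 404 here corresponds to the kubelet's "not registered" error:
        # the volume cannot be set up until the secret exists and is synced.
        print("secret lookup failed:", e.status, e.reason)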
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.741418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.741472 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.741489 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.741516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.741533 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:37Z","lastTransitionTime":"2026-02-17T00:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.742840 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:39:02.868825439 +0000 UTC
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.784537 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59"
Feb 17 00:23:37 crc kubenswrapper[4805]: E0217 00:23:37.784729 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174"
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.844480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.844539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.844557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.844580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.844598 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:37Z","lastTransitionTime":"2026-02-17T00:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.947144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.947222 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.947246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.947270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:37 crc kubenswrapper[4805]: I0217 00:23:37.947287 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:37Z","lastTransitionTime":"2026-02-17T00:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.050909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.050976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.050994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.051018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.051036 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:38Z","lastTransitionTime":"2026-02-17T00:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.153596 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.153663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.153684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.153709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.153727 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:38Z","lastTransitionTime":"2026-02-17T00:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.257284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.257392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.257415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.257442 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.257462 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:38Z","lastTransitionTime":"2026-02-17T00:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.360988 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.361043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.361058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.361079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.361094 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:38Z","lastTransitionTime":"2026-02-17T00:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.462796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.462836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.462848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.462865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.462875 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:38Z","lastTransitionTime":"2026-02-17T00:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.565733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.565837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.565865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.565879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.565887 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:38Z","lastTransitionTime":"2026-02-17T00:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.668676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.668732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.668750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.668776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.668799 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:38Z","lastTransitionTime":"2026-02-17T00:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
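Has your network provider started?"}

The certificate_manager.go:356 entries recurring once per second here (00:23:35 through 00:23:38) report the same expiration, 2026-02-24 05:53:03 UTC, but a different rotation deadline each time (2026-01-17, 2025-11-27, 2025-12-18, 2026-01-04). As I understand client-go's certificate manager, the deadline is re-drawn with jitter on each evaluation, uniformly between 70% and 90% of the certificate's lifetime; since every drawn deadline already lies before the node clock's 2026-02-17, rotation is due immediately and the manager re-evaluates on the next pass. A re-implementation of that draw as a sketch, not the kubelet's actual code:

    # Sketch only: jittered rotation deadline. The expiry is from the log;
    # the one-year issue time is an assumption, chosen because the logged
    # deadlines all fall inside the 70-90% window it implies.
    import random
    from datetime import datetime

    not_before = datetime(2025, 2, 24, 5, 53, 3)   # assumed issue time
    not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiry printed in the log

    def rotation_deadline(nb: datetime, na: datetime) -> datetime:
        return nb + (na - nb) * (0.7 + 0.2 * random.random())

    for _ in range(3):
        print(rotation_deadline(not_before, not_after))  # lands 2025-11 .. 2026-01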
Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.742971 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:07:16.496589068 +0000 UTC
Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.772061 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.772123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.772148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.772177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.772198 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:38Z","lastTransitionTime":"2026-02-17T00:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.784597 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.784648 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.784598 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 00:23:38 crc kubenswrapper[4805]: E0217 00:23:38.784738 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 00:23:38 crc kubenswrapper[4805]: E0217 00:23:38.784876 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 00:23:38 crc kubenswrapper[4805]: E0217 00:23:38.785446 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.875300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.875388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.875406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.875427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.875444 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:38Z","lastTransitionTime":"2026-02-17T00:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.978985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.979046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.979065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.979090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:38 crc kubenswrapper[4805]: I0217 00:23:38.979111 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:38Z","lastTransitionTime":"2026-02-17T00:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.082039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.082118 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.082146 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.082178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.082201 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:39Z","lastTransitionTime":"2026-02-17T00:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.185205 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.185262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.185278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.185300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.185321 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:39Z","lastTransitionTime":"2026-02-17T00:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.289658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.289768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.289779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.289801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.289812 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:39Z","lastTransitionTime":"2026-02-17T00:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.399442 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.399495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.399513 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.399537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.399553 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:39Z","lastTransitionTime":"2026-02-17T00:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.501857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.501903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.501918 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.501937 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.501948 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:39Z","lastTransitionTime":"2026-02-17T00:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.604014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.604049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.604057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.604069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.604078 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:39Z","lastTransitionTime":"2026-02-17T00:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.706897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.706942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.706955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.706974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.706985 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:39Z","lastTransitionTime":"2026-02-17T00:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.743398 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:33:10.12114052 +0000 UTC Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.784226 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:39 crc kubenswrapper[4805]: E0217 00:23:39.784492 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.809858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.809939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.809960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.810000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.810022 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:39Z","lastTransitionTime":"2026-02-17T00:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.912647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.912698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.912714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.912735 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:39 crc kubenswrapper[4805]: I0217 00:23:39.912751 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:39Z","lastTransitionTime":"2026-02-17T00:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.015837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.015891 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.015913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.015939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.015962 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:40Z","lastTransitionTime":"2026-02-17T00:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.119193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.119239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.119256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.119274 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.119285 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:40Z","lastTransitionTime":"2026-02-17T00:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.222448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.222503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.222520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.222542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.222559 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:40Z","lastTransitionTime":"2026-02-17T00:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.326554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.326651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.326670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.326696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.326718 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:40Z","lastTransitionTime":"2026-02-17T00:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.429275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.429396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.429419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.429451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.429473 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:40Z","lastTransitionTime":"2026-02-17T00:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.532893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.532958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.532978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.533009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.533098 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:40Z","lastTransitionTime":"2026-02-17T00:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.635971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.636038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.636057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.636086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.636137 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:40Z","lastTransitionTime":"2026-02-17T00:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.740214 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.740272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.740289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.740313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.740416 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:40Z","lastTransitionTime":"2026-02-17T00:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.743867 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:20:58.91795249 +0000 UTC Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.783784 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.783923 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:40 crc kubenswrapper[4805]: E0217 00:23:40.784151 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.784230 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:40 crc kubenswrapper[4805]: E0217 00:23:40.784395 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:40 crc kubenswrapper[4805]: E0217 00:23:40.784508 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.844148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.844215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.844235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.844266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.844288 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:40Z","lastTransitionTime":"2026-02-17T00:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.948244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.948316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.948370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.948406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:40 crc kubenswrapper[4805]: I0217 00:23:40.948430 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:40Z","lastTransitionTime":"2026-02-17T00:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.052412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.052494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.052516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.052550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.052573 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.155840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.155955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.155972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.155998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.156014 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.259110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.259175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.259193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.259219 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.259239 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.293903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.293953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.293966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.293984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.293998 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: E0217 00:23:41.311173 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.316168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.316285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.316314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.316389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.316420 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: E0217 00:23:41.335167 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.341203 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.341293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.341313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.341397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.341417 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: E0217 00:23:41.362056 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.366784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.366875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.366903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.366936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.366957 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: E0217 00:23:41.388404 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.393588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.393646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.393672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.393704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.393729 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: E0217 00:23:41.414690 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: E0217 00:23:41.414914 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.417677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.417755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.417780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.417812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.417837 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.521793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.521878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.521905 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.521939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.521961 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.624889 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.625004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.625023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.625053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.625074 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.729541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.729598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.729613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.729635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.729650 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.744526 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:52:54.185920936 +0000 UTC Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.784619 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:41 crc kubenswrapper[4805]: E0217 00:23:41.784889 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.791221 4805 scope.go:117] "RemoveContainer" containerID="c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.816639 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.835855 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.835903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.835913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.835932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.835947 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.840408 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.862956 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.880535 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.899023 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 
00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.915739 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.930256 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.939163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.939215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.939234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.939258 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.939275 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:41Z","lastTransitionTime":"2026-02-17T00:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.945489 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.961809 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.976871 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:41 crc kubenswrapper[4805]: I0217 00:23:41.993967 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:41Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.008656 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.034421 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:27Z\\\",\\\"message\\\":\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 00:23:27.378602 6245 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 00:23:27.378691 6245 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 00:23:27.378751 6245 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.041585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.041623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.041632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.041646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.041656 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:42Z","lastTransitionTime":"2026-02-17T00:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.053056 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.073666 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.089993 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.144957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.145007 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:42 crc 
kubenswrapper[4805]: I0217 00:23:42.145018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.145037 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.145049 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:42Z","lastTransitionTime":"2026-02-17T00:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.247216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.247252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.247262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.247278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.247288 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:42Z","lastTransitionTime":"2026-02-17T00:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.349019 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.349049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.349072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.349085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.349093 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:42Z","lastTransitionTime":"2026-02-17T00:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.386741 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/1.log" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.390235 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92"} Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.390454 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.411035 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.430611 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.444661 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.451521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.451561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.451577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.451597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.451611 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:42Z","lastTransitionTime":"2026-02-17T00:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.459447 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.470451 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.480461 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.491767 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.504797 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 
00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.516237 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.529966 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.553956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.554002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.554012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.554038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.554048 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:42Z","lastTransitionTime":"2026-02-17T00:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.556821 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f3
9a5bc119c901637bbd0b3e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:27Z\\\",\\\"message\\\":\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 00:23:27.378602 6245 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 00:23:27.378691 6245 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 00:23:27.378751 6245 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.570675 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.583148 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.595300 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.607297 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.616367 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:42Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.656669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.656719 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.656733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.656750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.656762 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:42Z","lastTransitionTime":"2026-02-17T00:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.739037 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.745231 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:39:44.495721448 +0000 UTC Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.759239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.759303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.759312 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.759338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.759347 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:42Z","lastTransitionTime":"2026-02-17T00:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.784699 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:42 crc kubenswrapper[4805]: E0217 00:23:42.784862 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.784992 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:42 crc kubenswrapper[4805]: E0217 00:23:42.785210 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.785228 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:42 crc kubenswrapper[4805]: E0217 00:23:42.785660 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.862121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.862168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.862178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.862195 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.862207 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:42Z","lastTransitionTime":"2026-02-17T00:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.964884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.964948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.964965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.964989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:42 crc kubenswrapper[4805]: I0217 00:23:42.965007 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:42Z","lastTransitionTime":"2026-02-17T00:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.067941 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.068003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.068024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.068046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.068063 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:43Z","lastTransitionTime":"2026-02-17T00:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.171027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.171100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.171124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.171152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.171171 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:43Z","lastTransitionTime":"2026-02-17T00:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.274502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.274547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.274563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.274585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.274604 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:43Z","lastTransitionTime":"2026-02-17T00:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.377220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.377276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.377292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.377316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.377362 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:43Z","lastTransitionTime":"2026-02-17T00:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.397236 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/2.log" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.398732 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/1.log" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.403078 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92" exitCode=1 Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.403140 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92"} Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.403198 4805 scope.go:117] "RemoveContainer" containerID="c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.405973 4805 scope.go:117] "RemoveContainer" containerID="487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92" Feb 17 00:23:43 crc kubenswrapper[4805]: E0217 00:23:43.406302 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.426284 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.448208 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.478530 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d4
0d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c24c854e6ef64b7345c8e6fbc912dc30ddcba713cf6dc1ddea57b0c0d3866506\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:27Z\\\",\\\"message\\\":\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.246\\\\\\\", Port:9443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 00:23:27.378602 6245 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 00:23:27.378691 6245 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication/oauth-openshift]} name:Service_openshift-authentication/oauth-openshift_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.222:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c0c2f725-e461-454e-a88c-c8350d62e1ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nF0217 00:23:27.378751 6245 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:42Z\\\",\\\"message\\\":\\\"elds:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 00:23:42.654599 6471 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655383 6471 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655406 6471 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0217 00:23:42.655415 6471 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0217 00:23:42.655422 6471 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655465 6471 factory.go:656] Stopping watch factory\\\\nI0217 00:23:42.655483 6471 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:23:42.655505 6471 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 00:23:42.655559 6471 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.481896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.481968 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.481986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.482011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.482028 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:43Z","lastTransitionTime":"2026-02-17T00:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.500026 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.525429 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.546625 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.567445 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.583992 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.584830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.584887 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.584905 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.584929 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.584950 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:43Z","lastTransitionTime":"2026-02-17T00:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.605483 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.627197 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.644180 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.664290 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.683961 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.691850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.691951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.691971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.691997 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.692055 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:43Z","lastTransitionTime":"2026-02-17T00:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.703006 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.718732 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.736738 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:43Z is after 2025-08-24T17:21:41Z" Feb 17 
00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.745885 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:21:10.093430712 +0000 UTC Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.784378 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:43 crc kubenswrapper[4805]: E0217 00:23:43.784606 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.795578 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.795628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.795701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.795729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.795746 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:43Z","lastTransitionTime":"2026-02-17T00:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.899421 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.899503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.899527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.899561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:43 crc kubenswrapper[4805]: I0217 00:23:43.899582 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:43Z","lastTransitionTime":"2026-02-17T00:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.003166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.003231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.003248 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.003273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.003289 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:44Z","lastTransitionTime":"2026-02-17T00:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.105873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.105963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.105981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.106006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.106023 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:44Z","lastTransitionTime":"2026-02-17T00:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.209279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.209411 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.209438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.209473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.209497 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:44Z","lastTransitionTime":"2026-02-17T00:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.313067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.313124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.313143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.313167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.313187 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:44Z","lastTransitionTime":"2026-02-17T00:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.411283 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/2.log" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.415768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.415816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.415832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.415854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.415874 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:44Z","lastTransitionTime":"2026-02-17T00:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.417357 4805 scope.go:117] "RemoveContainer" containerID="487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92" Feb 17 00:23:44 crc kubenswrapper[4805]: E0217 00:23:44.417689 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.433356 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for 
pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.452802 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.474450 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.491259 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.505871 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.520048 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.520101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.520152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.520176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.520195 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:44Z","lastTransitionTime":"2026-02-17T00:23:44Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.521130 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}
}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.540771 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.561504 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.596421 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f3
9a5bc119c901637bbd0b3e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:42Z\\\",\\\"message\\\":\\\"elds:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 00:23:42.654599 6471 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655383 6471 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655406 6471 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0217 00:23:42.655415 6471 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0217 00:23:42.655422 6471 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655465 6471 factory.go:656] Stopping watch factory\\\\nI0217 00:23:42.655483 6471 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:23:42.655505 6471 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 00:23:42.655559 6471 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.617986 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.623280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.623317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.623345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.623365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.623377 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:44Z","lastTransitionTime":"2026-02-17T00:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.636428 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.659722 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.680256 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.696474 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.717497 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.725714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.725750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.725759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.725771 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.725779 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:44Z","lastTransitionTime":"2026-02-17T00:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.733473 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.746589 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:40:49.775800587 +0000 UTC Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.783969 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.784091 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:44 crc kubenswrapper[4805]: E0217 00:23:44.784188 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.784274 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:44 crc kubenswrapper[4805]: E0217 00:23:44.784545 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:44 crc kubenswrapper[4805]: E0217 00:23:44.784735 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.805504 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.828159 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.835600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.835680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.835707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.835774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.835801 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:44Z","lastTransitionTime":"2026-02-17T00:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.858088 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.876235 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.907162 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:42Z\\\",\\\"message\\\":\\\"elds:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 00:23:42.654599 6471 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655383 6471 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655406 6471 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0217 00:23:42.655415 6471 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0217 00:23:42.655422 6471 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655465 6471 factory.go:656] Stopping watch factory\\\\nI0217 00:23:42.655483 6471 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:23:42.655505 6471 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 00:23:42.655559 6471 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.928246 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.939249 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.939321 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.939378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.939402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.939419 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:44Z","lastTransitionTime":"2026-02-17T00:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.951149 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.972297 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-
o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:44 crc kubenswrapper[4805]: I0217 00:23:44.988359 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.000925 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:44Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.015143 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:45Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.030296 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:45Z is after 2025-08-24T17:21:41Z" Feb 17 
00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.042452 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.042515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.042536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.042561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.042584 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:45Z","lastTransitionTime":"2026-02-17T00:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.043798 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:45Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.081194 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:45Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.104182 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:45Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.123966 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:45Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.145070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.145140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.145152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.145175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.145201 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:45Z","lastTransitionTime":"2026-02-17T00:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.248074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.248130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.248149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.248174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.248193 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:45Z","lastTransitionTime":"2026-02-17T00:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.351914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.351977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.351992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.352012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.352033 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:45Z","lastTransitionTime":"2026-02-17T00:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.455266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.455342 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.455364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.455387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.455405 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:45Z","lastTransitionTime":"2026-02-17T00:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.558400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.558451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.558463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.558480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.558492 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:45Z","lastTransitionTime":"2026-02-17T00:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.648410 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:45 crc kubenswrapper[4805]: E0217 00:23:45.648629 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:23:45 crc kubenswrapper[4805]: E0217 00:23:45.648776 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs podName:86b8a270-8cb3-4266-9fe0-3cfd027a9174 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:01.648745701 +0000 UTC m=+67.664555129 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs") pod "network-metrics-daemon-jnv59" (UID: "86b8a270-8cb3-4266-9fe0-3cfd027a9174") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.661986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.662052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.662068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.662092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.662111 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:45Z","lastTransitionTime":"2026-02-17T00:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.747429 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:12:38.559929287 +0000 UTC Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.765720 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.765778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.765796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.765830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.765849 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:45Z","lastTransitionTime":"2026-02-17T00:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.784395 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:45 crc kubenswrapper[4805]: E0217 00:23:45.784623 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.869875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.869962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.869993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.870029 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.870062 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:45Z","lastTransitionTime":"2026-02-17T00:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.977172 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.977271 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.977299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.977391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:45 crc kubenswrapper[4805]: I0217 00:23:45.977420 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:45Z","lastTransitionTime":"2026-02-17T00:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.081097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.081181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.081204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.081238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.081263 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:46Z","lastTransitionTime":"2026-02-17T00:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.184452 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.184519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.184536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.184561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.184576 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:46Z","lastTransitionTime":"2026-02-17T00:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.288117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.288273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.288296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.288379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.288406 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:46Z","lastTransitionTime":"2026-02-17T00:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.391556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.391606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.391655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.391678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.391694 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:46Z","lastTransitionTime":"2026-02-17T00:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.495021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.495119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.495147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.495185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.495207 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:46Z","lastTransitionTime":"2026-02-17T00:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.598078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.598147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.598175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.598207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.598231 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:46Z","lastTransitionTime":"2026-02-17T00:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.701204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.701524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.701544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.701569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.701587 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:46Z","lastTransitionTime":"2026-02-17T00:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.748270 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:38:47.086557654 +0000 UTC Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.784131 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.784147 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:46 crc kubenswrapper[4805]: E0217 00:23:46.784299 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.784316 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:46 crc kubenswrapper[4805]: E0217 00:23:46.784487 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:46 crc kubenswrapper[4805]: E0217 00:23:46.784658 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.805378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.805443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.805460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.805490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.805507 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:46Z","lastTransitionTime":"2026-02-17T00:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.909176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.909242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.909263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.909288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:46 crc kubenswrapper[4805]: I0217 00:23:46.909306 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:46Z","lastTransitionTime":"2026-02-17T00:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.012619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.012685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.012701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.012728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.012745 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:47Z","lastTransitionTime":"2026-02-17T00:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.064210 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.064373 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.064431 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064546 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:24:19.064501822 +0000 UTC m=+85.080311270 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064566 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064610 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064651 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:19.064628576 +0000 UTC m=+85.080438014 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064654 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064681 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.064709 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064743 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:19.064726068 +0000 UTC m=+85.080535496 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.064798 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064826 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064846 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064865 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064913 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:19.064899493 +0000 UTC m=+85.080708921 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.064958 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.065036 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:19.065015117 +0000 UTC m=+85.080824555 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.116293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.116387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.116406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.116430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.116448 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:47Z","lastTransitionTime":"2026-02-17T00:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.219782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.219858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.219876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.219904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.219923 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:47Z","lastTransitionTime":"2026-02-17T00:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.323556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.323629 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.323660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.323683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.323701 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:47Z","lastTransitionTime":"2026-02-17T00:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.425688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.425752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.425781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.425810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.425829 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:47Z","lastTransitionTime":"2026-02-17T00:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.519799 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.532807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.532853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.532869 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.532890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.532908 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:47Z","lastTransitionTime":"2026-02-17T00:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.536023 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.542214 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.563382 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.589784 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f3
9a5bc119c901637bbd0b3e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:42Z\\\",\\\"message\\\":\\\"elds:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 00:23:42.654599 6471 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655383 6471 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655406 6471 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0217 00:23:42.655415 6471 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0217 00:23:42.655422 6471 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655465 6471 factory.go:656] Stopping watch factory\\\\nI0217 00:23:42.655483 6471 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:23:42.655505 6471 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 00:23:42.655559 6471 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.608028 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.630028 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.635512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.635568 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.635586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.635609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.635626 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:47Z","lastTransitionTime":"2026-02-17T00:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.653186 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.680407 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.698423 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.717650 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.739951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.740008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.740026 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.740052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.740070 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:47Z","lastTransitionTime":"2026-02-17T00:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.741072 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.748545 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:35:23.05257753 +0000 UTC Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.758407 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.780703 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.784233 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:47 crc kubenswrapper[4805]: E0217 00:23:47.784494 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.805310 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.828636 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.844439 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.844537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.844555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.844781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.844806 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:47Z","lastTransitionTime":"2026-02-17T00:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.848699 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.866812 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:
28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:47Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.948273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.948373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.948390 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.948414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:47 crc kubenswrapper[4805]: I0217 00:23:47.948432 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:47Z","lastTransitionTime":"2026-02-17T00:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.051776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.051838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.051856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.051927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.051944 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:48Z","lastTransitionTime":"2026-02-17T00:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.154836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.154914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.154958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.154990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.155012 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:48Z","lastTransitionTime":"2026-02-17T00:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.258413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.258481 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.258501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.258525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.258542 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:48Z","lastTransitionTime":"2026-02-17T00:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.361648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.361737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.361756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.361780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.361799 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:48Z","lastTransitionTime":"2026-02-17T00:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.464180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.464222 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.464240 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.464262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.464280 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:48Z","lastTransitionTime":"2026-02-17T00:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.567262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.567375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.567599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.567632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.567654 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:48Z","lastTransitionTime":"2026-02-17T00:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.670644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.670715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.670738 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.670769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.670790 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:48Z","lastTransitionTime":"2026-02-17T00:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.749583 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:22:50.817395473 +0000 UTC Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.774220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.774282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.774303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.774367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.774393 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:48Z","lastTransitionTime":"2026-02-17T00:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.783727 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.783768 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:48 crc kubenswrapper[4805]: E0217 00:23:48.783919 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.783972 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:48 crc kubenswrapper[4805]: E0217 00:23:48.784098 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:48 crc kubenswrapper[4805]: E0217 00:23:48.784296 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.877113 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.877172 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.877190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.877213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.877231 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:48Z","lastTransitionTime":"2026-02-17T00:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.980303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.980406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.980432 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.980459 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:48 crc kubenswrapper[4805]: I0217 00:23:48.980477 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:48Z","lastTransitionTime":"2026-02-17T00:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.084256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.084319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.084377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.084401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.084423 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:49Z","lastTransitionTime":"2026-02-17T00:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.187687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.187736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.187756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.187784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.187811 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:49Z","lastTransitionTime":"2026-02-17T00:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.291443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.291502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.291529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.291561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.291583 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:49Z","lastTransitionTime":"2026-02-17T00:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.395095 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.395252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.395276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.395320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.395384 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:49Z","lastTransitionTime":"2026-02-17T00:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.501076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.501144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.501165 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.501193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.501212 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:49Z","lastTransitionTime":"2026-02-17T00:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.604570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.604609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.604621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.604639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.604651 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:49Z","lastTransitionTime":"2026-02-17T00:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.707558 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.707622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.707644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.707670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.707691 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:49Z","lastTransitionTime":"2026-02-17T00:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.750066 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:56:23.354112736 +0000 UTC Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.783901 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:49 crc kubenswrapper[4805]: E0217 00:23:49.784096 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.811235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.811284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.811302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.811355 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.811372 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:49Z","lastTransitionTime":"2026-02-17T00:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.915467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.915538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.915557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.915581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.915599 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:49Z","lastTransitionTime":"2026-02-17T00:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.951449 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.975737 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-op
erator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:49Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:49 crc kubenswrapper[4805]: I0217 00:23:49.998978 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:49Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.018192 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.019925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.019964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.019980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.020003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.020019 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:50Z","lastTransitionTime":"2026-02-17T00:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.034409 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.051555 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.066817 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.084318 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 
00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.097967 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.115290 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.123174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.123227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.123240 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.123263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.123277 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:50Z","lastTransitionTime":"2026-02-17T00:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.134152 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.152096 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e41751a-6cbb-4333-8384-ab48022560f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cb7f78f2c4171a849affbcb15fd06276969fb335a227f536fb43cff251872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c606117277077af4108de0b9bbae3f0333b8109ce1ac898cea87277d56edb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22
:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf126db3d482efbecea6828dc760735e023947be7a839fbda4a46382e20ca834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.172278 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.191187 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.210025 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.224788 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.226493 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.226532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.226549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.226571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.226587 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:50Z","lastTransitionTime":"2026-02-17T00:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.256784 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f3
9a5bc119c901637bbd0b3e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:42Z\\\",\\\"message\\\":\\\"elds:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 00:23:42.654599 6471 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655383 6471 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655406 6471 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0217 00:23:42.655415 6471 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0217 00:23:42.655422 6471 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655465 6471 factory.go:656] Stopping watch factory\\\\nI0217 00:23:42.655483 6471 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:23:42.655505 6471 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 00:23:42.655559 6471 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.276157 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:50Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.329794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.329866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.329885 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.329911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.329929 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:50Z","lastTransitionTime":"2026-02-17T00:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.433273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.433376 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.433401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.433430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.433454 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:50Z","lastTransitionTime":"2026-02-17T00:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.535852 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.535904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.535919 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.535942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.535956 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:50Z","lastTransitionTime":"2026-02-17T00:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.639415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.639455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.639464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.639477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.639485 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:50Z","lastTransitionTime":"2026-02-17T00:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.742951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.743002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.743024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.743054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.743074 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:50Z","lastTransitionTime":"2026-02-17T00:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.751255 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:40:28.611807746 +0000 UTC Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.784650 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.784705 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:50 crc kubenswrapper[4805]: E0217 00:23:50.784834 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.784907 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:50 crc kubenswrapper[4805]: E0217 00:23:50.785078 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:50 crc kubenswrapper[4805]: E0217 00:23:50.785181 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.851648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.851724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.851744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.851769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.851792 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:50Z","lastTransitionTime":"2026-02-17T00:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.954654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.954715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.954732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.954756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:50 crc kubenswrapper[4805]: I0217 00:23:50.954773 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:50Z","lastTransitionTime":"2026-02-17T00:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.057414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.057477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.057493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.057517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.057534 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.161060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.161109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.161125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.161148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.161164 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.263923 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.263974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.263993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.264016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.264034 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.367370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.367430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.367447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.367470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.367489 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.470289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.470374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.470393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.470417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.470475 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.541318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.541399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.541511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.541538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.541607 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: E0217 00:23:51.564621 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:51Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.570542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.570595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.570625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.570648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.570664 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: E0217 00:23:51.589503 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:51Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.594636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.594691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.594709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.594731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.594747 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: E0217 00:23:51.613215 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:51Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.618129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.618177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.618195 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.618222 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.618245 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: E0217 00:23:51.638978 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:51Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.644671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.644738 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.644760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.644788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.644809 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: E0217 00:23:51.666603 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:51Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:51 crc kubenswrapper[4805]: E0217 00:23:51.666959 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.670143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.670193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.670211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.670237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.670258 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.752116 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:18:02.554032243 +0000 UTC Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.773035 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.773066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.773074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.773086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.773094 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.784622 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:51 crc kubenswrapper[4805]: E0217 00:23:51.784813 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.876288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.876413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.876438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.876466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.876489 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.979857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.979917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.979941 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.979970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:51 crc kubenswrapper[4805]: I0217 00:23:51.979991 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:51Z","lastTransitionTime":"2026-02-17T00:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.082884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.083015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.083034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.083058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.083106 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:52Z","lastTransitionTime":"2026-02-17T00:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.186360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.186446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.186460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.186479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.186493 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:52Z","lastTransitionTime":"2026-02-17T00:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.291456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.291546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.291570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.291604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.291631 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:52Z","lastTransitionTime":"2026-02-17T00:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.394669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.394726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.394745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.394773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.394792 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:52Z","lastTransitionTime":"2026-02-17T00:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.498655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.498737 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.498754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.498780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.498866 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:52Z","lastTransitionTime":"2026-02-17T00:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.601500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.601599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.601631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.601673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.601701 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:52Z","lastTransitionTime":"2026-02-17T00:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.704989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.705073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.705093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.705126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.705147 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:52Z","lastTransitionTime":"2026-02-17T00:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.752291 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:01:02.778485432 +0000 UTC Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.784186 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.784276 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.784186 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:52 crc kubenswrapper[4805]: E0217 00:23:52.784432 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:52 crc kubenswrapper[4805]: E0217 00:23:52.784568 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:52 crc kubenswrapper[4805]: E0217 00:23:52.784715 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.808050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.808110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.808129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.808152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.808170 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:52Z","lastTransitionTime":"2026-02-17T00:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.911212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.911291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.911316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.911382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:52 crc kubenswrapper[4805]: I0217 00:23:52.911407 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:52Z","lastTransitionTime":"2026-02-17T00:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.014953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.015022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.015042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.015067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.015084 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:53Z","lastTransitionTime":"2026-02-17T00:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.117743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.117818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.117836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.117862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.117879 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:53Z","lastTransitionTime":"2026-02-17T00:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.220376 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.220431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.220449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.220476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.220493 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:53Z","lastTransitionTime":"2026-02-17T00:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.323453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.323557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.323579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.323612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.323636 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:53Z","lastTransitionTime":"2026-02-17T00:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.427386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.427446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.427469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.427490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.427508 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:53Z","lastTransitionTime":"2026-02-17T00:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.531728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.531811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.531834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.531872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.531898 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:53Z","lastTransitionTime":"2026-02-17T00:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.635077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.635134 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.635149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.635168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.635181 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:53Z","lastTransitionTime":"2026-02-17T00:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.739894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.739978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.739996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.740027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.740048 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:53Z","lastTransitionTime":"2026-02-17T00:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.752661 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:27:29.018639937 +0000 UTC Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.784721 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:53 crc kubenswrapper[4805]: E0217 00:23:53.784957 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.844846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.844925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.844944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.844978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.844997 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:53Z","lastTransitionTime":"2026-02-17T00:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.948133 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.948207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.948224 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.948249 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:53 crc kubenswrapper[4805]: I0217 00:23:53.948267 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:53Z","lastTransitionTime":"2026-02-17T00:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.051419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.051502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.051523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.051551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.051570 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:54Z","lastTransitionTime":"2026-02-17T00:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.155209 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.155279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.155294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.155320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.155389 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:54Z","lastTransitionTime":"2026-02-17T00:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.259128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.259221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.259250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.259279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.259305 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:54Z","lastTransitionTime":"2026-02-17T00:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.363101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.363183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.363201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.363225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.363242 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:54Z","lastTransitionTime":"2026-02-17T00:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.466722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.466782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.466799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.466824 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.466843 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:54Z","lastTransitionTime":"2026-02-17T00:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.570308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.570405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.570422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.570453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.570472 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:54Z","lastTransitionTime":"2026-02-17T00:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.673578 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.673966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.674128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.674276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.674470 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:54Z","lastTransitionTime":"2026-02-17T00:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.753261 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:16:04.462158986 +0000 UTC Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.777897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.777962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.777980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.778005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.778023 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:54Z","lastTransitionTime":"2026-02-17T00:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.784288 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:54 crc kubenswrapper[4805]: E0217 00:23:54.784479 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.784731 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:54 crc kubenswrapper[4805]: E0217 00:23:54.784832 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.785658 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:54 crc kubenswrapper[4805]: E0217 00:23:54.786860 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.810430 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserv
er-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:54Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.834492 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:54Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.852309 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:54Z is after 2025-08-24T17:21:41Z" Feb 17 
00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.866913 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:54Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.880631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.880688 4805 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.880705 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.880729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.880746 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:54Z","lastTransitionTime":"2026-02-17T00:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.889428 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:54Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.909098 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:54Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.931783 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:54Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.948715 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:54Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.968005 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:54Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.982948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.983001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.983017 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.983038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.983052 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:54Z","lastTransitionTime":"2026-02-17T00:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.985054 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:54Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:54 crc kubenswrapper[4805]: I0217 00:23:54.997667 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:54Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.026636 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:42Z\\\",\\\"message\\\":\\\"elds:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 00:23:42.654599 6471 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655383 6471 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655406 6471 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0217 00:23:42.655415 6471 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0217 00:23:42.655422 6471 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655465 6471 factory.go:656] Stopping watch factory\\\\nI0217 00:23:42.655483 6471 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:23:42.655505 6471 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 00:23:42.655559 6471 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:55Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.045598 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:55Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.061798 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e41751a-6cbb-4333-8384-ab48022560f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cb7f78f2c4171a849affbcb15fd06276969fb335a227f536fb43cff251872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c606117277077af4108de0b9bbae3f0333b8109ce1ac898cea87277d56edb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf126db3d482efbecea6828dc760735e023947be7a839fbda4a46382e20ca834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:55Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.077991 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:55Z is after 
2025-08-24T17:21:41Z" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.085416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.085672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.085849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.086001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.086150 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:55Z","lastTransitionTime":"2026-02-17T00:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.095576 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:55Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.114529 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:23:55Z is after 2025-08-24T17:21:41Z" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.188732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.188852 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.188882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.188913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.188937 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:55Z","lastTransitionTime":"2026-02-17T00:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.292938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.293005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.293024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.293255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.293285 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:55Z","lastTransitionTime":"2026-02-17T00:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.395857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.395892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.395903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.395920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.395931 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:55Z","lastTransitionTime":"2026-02-17T00:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.498378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.498436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.498454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.498482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.498498 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:55Z","lastTransitionTime":"2026-02-17T00:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.601310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.601392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.601411 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.601433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.601450 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:55Z","lastTransitionTime":"2026-02-17T00:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.704449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.704494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.704505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.704524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.704537 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:55Z","lastTransitionTime":"2026-02-17T00:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.754352 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 01:36:54.035316673 +0000 UTC Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.783608 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:55 crc kubenswrapper[4805]: E0217 00:23:55.783759 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.807790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.807853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.807866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.807884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.807897 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:55Z","lastTransitionTime":"2026-02-17T00:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.910960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.911047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.911074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.911107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:55 crc kubenswrapper[4805]: I0217 00:23:55.911126 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:55Z","lastTransitionTime":"2026-02-17T00:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.013729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.013780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.013798 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.013825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.013842 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:56Z","lastTransitionTime":"2026-02-17T00:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.117537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.117637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.117655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.117684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.117703 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:56Z","lastTransitionTime":"2026-02-17T00:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.220272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.220357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.220375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.220396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.220414 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:56Z","lastTransitionTime":"2026-02-17T00:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.323909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.324453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.324477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.324509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.324531 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:56Z","lastTransitionTime":"2026-02-17T00:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.428134 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.428215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.428243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.428275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.428299 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:56Z","lastTransitionTime":"2026-02-17T00:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.530986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.531046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.531062 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.531090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.531109 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:56Z","lastTransitionTime":"2026-02-17T00:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.633201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.633262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.633279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.633303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.633319 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:56Z","lastTransitionTime":"2026-02-17T00:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.736507 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.736552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.736568 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.736591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.736606 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:56Z","lastTransitionTime":"2026-02-17T00:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.755143 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:02:52.441647893 +0000 UTC Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.784606 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.784650 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.784732 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:56 crc kubenswrapper[4805]: E0217 00:23:56.784927 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:56 crc kubenswrapper[4805]: E0217 00:23:56.785233 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:56 crc kubenswrapper[4805]: E0217 00:23:56.785911 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.786455 4805 scope.go:117] "RemoveContainer" containerID="487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92" Feb 17 00:23:56 crc kubenswrapper[4805]: E0217 00:23:56.786819 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.840042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.840099 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.840117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.840140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.840162 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:56Z","lastTransitionTime":"2026-02-17T00:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.942579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.942652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.942677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.942708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:56 crc kubenswrapper[4805]: I0217 00:23:56.942730 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:56Z","lastTransitionTime":"2026-02-17T00:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.045873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.045939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.045957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.045984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.046004 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:57Z","lastTransitionTime":"2026-02-17T00:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.149103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.149156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.149168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.149185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.149199 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:57Z","lastTransitionTime":"2026-02-17T00:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.252048 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.252109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.252130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.252155 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.252173 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:57Z","lastTransitionTime":"2026-02-17T00:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.356307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.356579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.356700 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.356800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.356885 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:57Z","lastTransitionTime":"2026-02-17T00:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.459865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.460772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.460932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.461087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.461287 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:57Z","lastTransitionTime":"2026-02-17T00:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.564621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.564679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.564695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.564717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.564743 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:57Z","lastTransitionTime":"2026-02-17T00:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.667829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.667973 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.667991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.668015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.668031 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:57Z","lastTransitionTime":"2026-02-17T00:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.755621 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:25:42.112022145 +0000 UTC Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.771318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.771412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.771429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.771451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.771469 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:57Z","lastTransitionTime":"2026-02-17T00:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.783982 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:57 crc kubenswrapper[4805]: E0217 00:23:57.784153 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.873757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.873837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.873865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.873895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.873918 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:57Z","lastTransitionTime":"2026-02-17T00:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.976560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.976622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.976638 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.976659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:57 crc kubenswrapper[4805]: I0217 00:23:57.976677 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:57Z","lastTransitionTime":"2026-02-17T00:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.079399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.079786 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.079927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.080070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.080193 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:58Z","lastTransitionTime":"2026-02-17T00:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.183611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.183960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.184112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.184257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.184429 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:58Z","lastTransitionTime":"2026-02-17T00:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.287857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.287961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.287991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.288016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.288033 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:58Z","lastTransitionTime":"2026-02-17T00:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.390450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.390549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.390577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.390610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.390630 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:58Z","lastTransitionTime":"2026-02-17T00:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.494140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.494220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.494244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.494269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.494290 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:58Z","lastTransitionTime":"2026-02-17T00:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.598523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.598581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.598599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.598629 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.598649 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:58Z","lastTransitionTime":"2026-02-17T00:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.701921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.701981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.701998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.702023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.702041 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:58Z","lastTransitionTime":"2026-02-17T00:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.756133 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:28:35.899380508 +0000 UTC Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.784804 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.784813 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:23:58 crc kubenswrapper[4805]: E0217 00:23:58.785026 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.785188 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:23:58 crc kubenswrapper[4805]: E0217 00:23:58.785355 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:23:58 crc kubenswrapper[4805]: E0217 00:23:58.785559 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.804966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.804991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.805000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.805018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.805030 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:58Z","lastTransitionTime":"2026-02-17T00:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.909392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.909466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.909486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.909516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:58 crc kubenswrapper[4805]: I0217 00:23:58.909538 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:58Z","lastTransitionTime":"2026-02-17T00:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.012877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.012931 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.012948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.012971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.012987 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:59Z","lastTransitionTime":"2026-02-17T00:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.116596 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.116674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.116695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.116726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.116746 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:59Z","lastTransitionTime":"2026-02-17T00:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.219236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.219271 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.219282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.219298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.219308 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:59Z","lastTransitionTime":"2026-02-17T00:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.322711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.322767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.322804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.322823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.322836 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:59Z","lastTransitionTime":"2026-02-17T00:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.425497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.425565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.425586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.425610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.425626 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:59Z","lastTransitionTime":"2026-02-17T00:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.528201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.528241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.528252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.528292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.528305 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:59Z","lastTransitionTime":"2026-02-17T00:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.630563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.630612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.630623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.630641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.630652 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:59Z","lastTransitionTime":"2026-02-17T00:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.733210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.733243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.733251 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.733265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.733275 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:59Z","lastTransitionTime":"2026-02-17T00:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.756446 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:37:12.477312253 +0000 UTC Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.784494 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:23:59 crc kubenswrapper[4805]: E0217 00:23:59.784648 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.835485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.835584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.835602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.835625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.835642 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:59Z","lastTransitionTime":"2026-02-17T00:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.938116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.938184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.938200 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.938224 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:23:59 crc kubenswrapper[4805]: I0217 00:23:59.938241 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:23:59Z","lastTransitionTime":"2026-02-17T00:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.041194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.041253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.041273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.041297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.041316 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:00Z","lastTransitionTime":"2026-02-17T00:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.143664 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.143702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.143712 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.143727 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.143738 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:00Z","lastTransitionTime":"2026-02-17T00:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.245480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.245540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.245557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.245582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.245601 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:00Z","lastTransitionTime":"2026-02-17T00:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.349903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.349966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.349998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.350015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.350026 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:00Z","lastTransitionTime":"2026-02-17T00:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.452448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.452493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.452505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.452521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.452533 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:00Z","lastTransitionTime":"2026-02-17T00:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.555006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.555038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.555047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.555064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.555074 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:00Z","lastTransitionTime":"2026-02-17T00:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.657726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.657779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.657796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.657820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.657838 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:00Z","lastTransitionTime":"2026-02-17T00:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.756541 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 21:34:16.82945199 +0000 UTC Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.760524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.760577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.760591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.760609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.760617 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:00Z","lastTransitionTime":"2026-02-17T00:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.784402 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.784406 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:00 crc kubenswrapper[4805]: E0217 00:24:00.784564 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:00 crc kubenswrapper[4805]: E0217 00:24:00.784648 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.784415 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:00 crc kubenswrapper[4805]: E0217 00:24:00.784741 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.862921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.862973 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.862989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.863008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.863021 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:00Z","lastTransitionTime":"2026-02-17T00:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.965875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.965930 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.965943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.965967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:00 crc kubenswrapper[4805]: I0217 00:24:00.965981 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:00Z","lastTransitionTime":"2026-02-17T00:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.068683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.068783 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.068850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.068884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.068906 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.171906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.171939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.171949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.171966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.171977 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.274152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.274179 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.274189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.274203 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.274216 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.377116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.377180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.377202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.377227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.377247 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.479856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.480159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.480299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.480483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.480635 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.583273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.583318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.583360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.583381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.583393 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.674071 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:01 crc kubenswrapper[4805]: E0217 00:24:01.674252 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:24:01 crc kubenswrapper[4805]: E0217 00:24:01.674364 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs podName:86b8a270-8cb3-4266-9fe0-3cfd027a9174 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:33.674342188 +0000 UTC m=+99.690151586 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs") pod "network-metrics-daemon-jnv59" (UID: "86b8a270-8cb3-4266-9fe0-3cfd027a9174") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.685404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.685447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.685458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.685476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.685487 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.756794 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:12:51.931725899 +0000 UTC Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.783910 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:01 crc kubenswrapper[4805]: E0217 00:24:01.784072 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.789771 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.789812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.789827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.789847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.789859 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.846760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.846804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.846816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.846832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.846843 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: E0217 00:24:01.859753 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:01Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.863003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.863111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.863179 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.863248 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.863313 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: E0217 00:24:01.878924 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:01Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.882801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.882842 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.882854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.882870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.882881 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: E0217 00:24:01.893965 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:01Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.896488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.896520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.896531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.896545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.896555 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: E0217 00:24:01.906353 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:01Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.909674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.909722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.909739 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.909761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.909780 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:01 crc kubenswrapper[4805]: E0217 00:24:01.925139 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:01Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:01 crc kubenswrapper[4805]: E0217 00:24:01.925365 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.926869 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.926910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.926927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.926947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:01 crc kubenswrapper[4805]: I0217 00:24:01.926963 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:01Z","lastTransitionTime":"2026-02-17T00:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.029565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.029597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.029605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.029618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.029628 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:02Z","lastTransitionTime":"2026-02-17T00:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.131641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.131704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.131715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.131752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.131766 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:02Z","lastTransitionTime":"2026-02-17T00:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.234588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.234647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.234671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.234698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.234721 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:02Z","lastTransitionTime":"2026-02-17T00:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.337413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.337480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.337497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.337521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.337537 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:02Z","lastTransitionTime":"2026-02-17T00:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.440563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.440683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.440712 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.440890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.440978 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:02Z","lastTransitionTime":"2026-02-17T00:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.544138 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.544182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.544213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.544227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.544237 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:02Z","lastTransitionTime":"2026-02-17T00:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.646768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.646829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.646852 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.646884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.646906 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:02Z","lastTransitionTime":"2026-02-17T00:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.749604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.749642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.749653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.749669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.749682 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:02Z","lastTransitionTime":"2026-02-17T00:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.756965 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:21:39.754886363 +0000 UTC Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.784038 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.784098 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.784041 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:02 crc kubenswrapper[4805]: E0217 00:24:02.784136 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:02 crc kubenswrapper[4805]: E0217 00:24:02.784438 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:02 crc kubenswrapper[4805]: E0217 00:24:02.784509 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.852392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.852449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.852465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.852486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.852503 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:02Z","lastTransitionTime":"2026-02-17T00:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.954563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.954590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.954598 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.954610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:02 crc kubenswrapper[4805]: I0217 00:24:02.954620 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:02Z","lastTransitionTime":"2026-02-17T00:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.057274 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.057361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.057386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.057411 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.057429 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:03Z","lastTransitionTime":"2026-02-17T00:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.160514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.160567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.160576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.160590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.160601 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:03Z","lastTransitionTime":"2026-02-17T00:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.263215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.263257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.263285 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.263303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.263317 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:03Z","lastTransitionTime":"2026-02-17T00:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.365436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.365473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.365498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.365514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.365527 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:03Z","lastTransitionTime":"2026-02-17T00:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.467635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.467675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.467683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.467698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.467707 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:03Z","lastTransitionTime":"2026-02-17T00:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.488966 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lk6fw_5da6b304-e28f-4666-817f-06bcc107e3fe/kube-multus/0.log" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.489051 4805 generic.go:334] "Generic (PLEG): container finished" podID="5da6b304-e28f-4666-817f-06bcc107e3fe" containerID="5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d" exitCode=1 Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.489097 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lk6fw" event={"ID":"5da6b304-e28f-4666-817f-06bcc107e3fe","Type":"ContainerDied","Data":"5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d"} Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.489713 4805 scope.go:117] "RemoveContainer" containerID="5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.500930 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.514779 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.530990 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.540312 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.549871 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.560954 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.569391 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 
00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.569930 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.570009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.570066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.570127 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.570185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:03Z","lastTransitionTime":"2026-02-17T00:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.578040 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.590028 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.603857 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:02Z\\\",\\\"message\\\":\\\"2026-02-17T00:23:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf\\\\n2026-02-17T00:23:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf to /host/opt/cni/bin/\\\\n2026-02-17T00:23:17Z [verbose] multus-daemon 
started\\\\n2026-02-17T00:23:17Z [verbose] Readiness Indicator file check\\\\n2026-02-17T00:24:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.616165 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.626833 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e41751a-6cbb-4333-8384-ab48022560f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cb7f78f2c4171a849affbcb15fd06276969fb335a227f536fb43cff251872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c606117277077af4108de0b9bbae3f0333b8109ce1ac898cea87277d56edb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf126db3d482efbecea6828dc760735e023947be7a839fbda4a46382e20ca834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.639363 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.649154 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.659573 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.670072 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.672243 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.672356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.672413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.672502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.672563 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:03Z","lastTransitionTime":"2026-02-17T00:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.691824 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f3
9a5bc119c901637bbd0b3e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:42Z\\\",\\\"message\\\":\\\"elds:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 00:23:42.654599 6471 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655383 6471 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655406 6471 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0217 00:23:42.655415 6471 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0217 00:23:42.655422 6471 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655465 6471 factory.go:656] Stopping watch factory\\\\nI0217 00:23:42.655483 6471 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:23:42.655505 6471 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 00:23:42.655559 6471 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:03Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.757079 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:07:48.99619737 +0000 UTC Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.775017 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.775070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.775089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.775111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.775127 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:03Z","lastTransitionTime":"2026-02-17T00:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.784200 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:03 crc kubenswrapper[4805]: E0217 00:24:03.784291 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.877196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.877257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.877270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.877284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.877295 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:03Z","lastTransitionTime":"2026-02-17T00:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.980241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.980313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.980373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.980396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:03 crc kubenswrapper[4805]: I0217 00:24:03.980416 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:03Z","lastTransitionTime":"2026-02-17T00:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.082766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.082804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.082814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.082830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.082841 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:04Z","lastTransitionTime":"2026-02-17T00:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.186404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.186617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.186772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.186875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.186968 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:04Z","lastTransitionTime":"2026-02-17T00:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.289074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.289111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.289124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.289140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.289151 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:04Z","lastTransitionTime":"2026-02-17T00:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.391673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.391740 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.391751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.391794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.391811 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:04Z","lastTransitionTime":"2026-02-17T00:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.493663 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lk6fw_5da6b304-e28f-4666-817f-06bcc107e3fe/kube-multus/0.log" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.493703 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lk6fw" event={"ID":"5da6b304-e28f-4666-817f-06bcc107e3fe","Type":"ContainerStarted","Data":"dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f"} Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.494264 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.494276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.494283 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.494292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.494311 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:04Z","lastTransitionTime":"2026-02-17T00:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.507574 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.515424 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.529287 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:42Z\\\",\\\"message\\\":\\\"elds:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 00:23:42.654599 6471 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655383 6471 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655406 6471 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0217 00:23:42.655415 6471 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0217 00:23:42.655422 6471 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655465 6471 factory.go:656] Stopping watch factory\\\\nI0217 00:23:42.655483 6471 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:23:42.655505 6471 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 00:23:42.655559 6471 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.538558 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.549080 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e41751a-6cbb-4333-8384-ab48022560f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cb7f78f2c4171a849affbcb15fd06276969fb335a227f536fb43cff251872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c606117277077af4108de0b9bbae3f0333b8109ce1ac898cea87277d56edb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf126db3d482efbecea6828dc760735e023947be7a839fbda4a46382e20ca834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.560634 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 
2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.572410 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.589698 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.596788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.596855 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.596876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.596903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.596925 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:04Z","lastTransitionTime":"2026-02-17T00:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.609231 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.622999 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.636201 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 
00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.644563 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.656448 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.668517 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.677891 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.689454 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.699680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.699718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.699730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.699746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.699757 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:04Z","lastTransitionTime":"2026-02-17T00:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.700619 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:02Z\\\",\\\"message\\\":\\\"2026-02-17T00:23:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf\\\\n2026-02-17T00:23:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf to /host/opt/cni/bin/\\\\n2026-02-17T00:23:17Z [verbose] multus-daemon started\\\\n2026-02-17T00:23:17Z [verbose] Readiness Indicator file check\\\\n2026-02-17T00:24:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.757859 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:21:42.211260963 +0000 UTC Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.783887 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.783932 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.783968 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:04 crc kubenswrapper[4805]: E0217 00:24:04.784069 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:04 crc kubenswrapper[4805]: E0217 00:24:04.784143 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:04 crc kubenswrapper[4805]: E0217 00:24:04.784285 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.801862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.801884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.801892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.801904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.801913 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:04Z","lastTransitionTime":"2026-02-17T00:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.805669 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.828075 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.846267 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.863821 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.881102 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.895580 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.904320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.904429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.904447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.904470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.904490 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:04Z","lastTransitionTime":"2026-02-17T00:24:04Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.910628 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}
}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.924837 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.941612 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.958182 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:02Z\\\",\\\"message\\\":\\\"2026-02-17T00:23:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf\\\\n2026-02-17T00:23:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf to /host/opt/cni/bin/\\\\n2026-02-17T00:23:17Z [verbose] multus-daemon started\\\\n2026-02-17T00:23:17Z [verbose] Readiness Indicator file check\\\\n2026-02-17T00:24:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.974611 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:04 crc kubenswrapper[4805]: I0217 00:24:04.988176 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e41751a-6cbb-4333-8384-ab48022560f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cb7f78f2c4171a849affbcb15fd06276969fb335a227f536fb43cff251872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c606117277077af4108de0b9bbae3f0333b8109ce1ac898cea87277d56edb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf126db3d482efbecea6828dc760735e023947be7a839fbda4a46382e20ca834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:04Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.002100 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:05Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.006606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.006743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.006835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.006932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.007003 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:05Z","lastTransitionTime":"2026-02-17T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.015651 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:05Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.028466 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:05Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.038367 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:05Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.056263 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:42Z\\\",\\\"message\\\":\\\"elds:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 00:23:42.654599 6471 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655383 6471 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655406 6471 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0217 00:23:42.655415 6471 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0217 00:23:42.655422 6471 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655465 6471 factory.go:656] Stopping watch factory\\\\nI0217 00:23:42.655483 6471 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:23:42.655505 6471 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 00:23:42.655559 6471 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:05Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.108850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.108878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.108886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.108899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.108907 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:05Z","lastTransitionTime":"2026-02-17T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.211301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.211395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.211413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.211435 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.211451 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:05Z","lastTransitionTime":"2026-02-17T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.313630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.313690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.313706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.313729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.313746 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:05Z","lastTransitionTime":"2026-02-17T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.415683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.415754 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.415774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.415800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.415820 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:05Z","lastTransitionTime":"2026-02-17T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.518652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.518693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.518706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.518724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.518735 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:05Z","lastTransitionTime":"2026-02-17T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.621095 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.621145 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.621158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.621175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.621187 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:05Z","lastTransitionTime":"2026-02-17T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.723233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.723279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.723290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.723307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.723320 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:05Z","lastTransitionTime":"2026-02-17T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.758552 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:39:41.437810222 +0000 UTC Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.784453 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:05 crc kubenswrapper[4805]: E0217 00:24:05.784580 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.825418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.825466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.825479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.825500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.825515 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:05Z","lastTransitionTime":"2026-02-17T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.928403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.928470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.928487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.928510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:05 crc kubenswrapper[4805]: I0217 00:24:05.928526 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:05Z","lastTransitionTime":"2026-02-17T00:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.030790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.030848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.030864 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.030881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.030893 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:06Z","lastTransitionTime":"2026-02-17T00:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.133617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.133683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.133706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.133734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.133757 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:06Z","lastTransitionTime":"2026-02-17T00:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.236132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.236196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.236238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.236261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.236280 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:06Z","lastTransitionTime":"2026-02-17T00:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.338839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.338896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.338914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.338981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.339010 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:06Z","lastTransitionTime":"2026-02-17T00:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.442441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.442508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.442520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.442537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.442552 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:06Z","lastTransitionTime":"2026-02-17T00:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.544825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.544883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.544902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.544925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.544941 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:06Z","lastTransitionTime":"2026-02-17T00:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.647221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.647279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.647298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.647320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.647370 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:06Z","lastTransitionTime":"2026-02-17T00:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.750731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.750793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.750811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.750835 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.750852 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:06Z","lastTransitionTime":"2026-02-17T00:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.759019 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 01:46:26.311457012 +0000 UTC Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.783992 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.783994 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.784192 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:06 crc kubenswrapper[4805]: E0217 00:24:06.784285 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:06 crc kubenswrapper[4805]: E0217 00:24:06.784410 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:06 crc kubenswrapper[4805]: E0217 00:24:06.784591 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.853816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.853877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.853896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.853920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.853935 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:06Z","lastTransitionTime":"2026-02-17T00:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.956421 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.956475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.956491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.956512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:06 crc kubenswrapper[4805]: I0217 00:24:06.956528 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:06Z","lastTransitionTime":"2026-02-17T00:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.059191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.059267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.059279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.059295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.059307 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:07Z","lastTransitionTime":"2026-02-17T00:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.162368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.162439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.162460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.162485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.162501 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:07Z","lastTransitionTime":"2026-02-17T00:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.264945 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.265027 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.265065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.265101 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.265143 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:07Z","lastTransitionTime":"2026-02-17T00:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.368317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.368423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.368444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.368467 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.368485 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:07Z","lastTransitionTime":"2026-02-17T00:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.471189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.471232 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.471243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.471260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.471271 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:07Z","lastTransitionTime":"2026-02-17T00:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.573574 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.573619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.573632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.573649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.573661 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:07Z","lastTransitionTime":"2026-02-17T00:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.676119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.676162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.676172 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.676212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.676225 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:07Z","lastTransitionTime":"2026-02-17T00:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.760148 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:32:02.003557748 +0000 UTC Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.779211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.779278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.779300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.779357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.779379 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:07Z","lastTransitionTime":"2026-02-17T00:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.783879 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:07 crc kubenswrapper[4805]: E0217 00:24:07.784131 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.882344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.882424 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.882457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.882490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.882515 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:07Z","lastTransitionTime":"2026-02-17T00:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.986975 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.987097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.987111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.987131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:07 crc kubenswrapper[4805]: I0217 00:24:07.987148 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:07Z","lastTransitionTime":"2026-02-17T00:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.090635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.090681 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.090703 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.090731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.090752 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:08Z","lastTransitionTime":"2026-02-17T00:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.194861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.195002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.195024 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.195119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.195144 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:08Z","lastTransitionTime":"2026-02-17T00:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.298383 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.298474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.298493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.298527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.298546 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:08Z","lastTransitionTime":"2026-02-17T00:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.401299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.401364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.401377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.401395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.401409 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:08Z","lastTransitionTime":"2026-02-17T00:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.504896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.504946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.504962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.504987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.505007 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:08Z","lastTransitionTime":"2026-02-17T00:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.608533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.608593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.608607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.608632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.608655 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:08Z","lastTransitionTime":"2026-02-17T00:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.711922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.711969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.712014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.712032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.712044 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:08Z","lastTransitionTime":"2026-02-17T00:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.760916 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 21:02:15.045447054 +0000 UTC Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.784368 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.784480 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.784401 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:08 crc kubenswrapper[4805]: E0217 00:24:08.784570 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:08 crc kubenswrapper[4805]: E0217 00:24:08.784665 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:08 crc kubenswrapper[4805]: E0217 00:24:08.784900 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.814433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.814503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.814523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.814550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.814569 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:08Z","lastTransitionTime":"2026-02-17T00:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.917208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.917276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.917298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.917356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:08 crc kubenswrapper[4805]: I0217 00:24:08.917382 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:08Z","lastTransitionTime":"2026-02-17T00:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.020233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.020310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.020370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.020401 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.020424 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:09Z","lastTransitionTime":"2026-02-17T00:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.124259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.124393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.124416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.124444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.124462 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:09Z","lastTransitionTime":"2026-02-17T00:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.228079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.228142 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.228170 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.228193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.228211 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:09Z","lastTransitionTime":"2026-02-17T00:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.331445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.331510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.331527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.331550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.331569 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:09Z","lastTransitionTime":"2026-02-17T00:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.434266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.434363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.434382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.434408 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.434430 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:09Z","lastTransitionTime":"2026-02-17T00:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.537743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.537804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.537821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.537848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.537865 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:09Z","lastTransitionTime":"2026-02-17T00:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.641487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.641544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.641561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.641588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.641605 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:09Z","lastTransitionTime":"2026-02-17T00:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.744381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.744431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.744449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.744471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.744550 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:09Z","lastTransitionTime":"2026-02-17T00:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.761776 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:12:48.164042111 +0000 UTC Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.784260 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:09 crc kubenswrapper[4805]: E0217 00:24:09.784995 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.785452 4805 scope.go:117] "RemoveContainer" containerID="487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.847807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.847868 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.847884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.847947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.847965 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:09Z","lastTransitionTime":"2026-02-17T00:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.957184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.957298 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.957315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.957367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:09 crc kubenswrapper[4805]: I0217 00:24:09.957385 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:09Z","lastTransitionTime":"2026-02-17T00:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.060558 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.060616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.060635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.060659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.060676 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:10Z","lastTransitionTime":"2026-02-17T00:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.163487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.163532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.163548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.163567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.163583 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:10Z","lastTransitionTime":"2026-02-17T00:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.267030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.267075 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.267087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.267104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.267117 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:10Z","lastTransitionTime":"2026-02-17T00:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.369447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.369493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.369504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.369521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.369532 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:10Z","lastTransitionTime":"2026-02-17T00:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.472587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.472651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.472668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.472692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.472710 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:10Z","lastTransitionTime":"2026-02-17T00:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.517427 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/2.log" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.521941 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5"} Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.522755 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.546042 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f2387
2410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.570533 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.575640 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.575694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:10 crc 
kubenswrapper[4805]: I0217 00:24:10.575714 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.575741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.575759 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:10Z","lastTransitionTime":"2026-02-17T00:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.587403 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.602004 4805 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.617552 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.634999 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.650806 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.667972 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.678659 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.678722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.678763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.678783 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.678796 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:10Z","lastTransitionTime":"2026-02-17T00:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.682949 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.700382 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:02Z\\\",\\\"message\\\":\\\"2026-02-17T00:23:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf\\\\n2026-02-17T00:23:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf to /host/opt/cni/bin/\\\\n2026-02-17T00:23:17Z [verbose] multus-daemon started\\\\n2026-02-17T00:23:17Z [verbose] Readiness Indicator file check\\\\n2026-02-17T00:24:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.719272 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.731340 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.750638 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:42Z\\\",\\\"message\\\":\\\"elds:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 00:23:42.654599 6471 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655383 6471 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655406 6471 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0217 00:23:42.655415 6471 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0217 00:23:42.655422 6471 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655465 6471 factory.go:656] Stopping watch factory\\\\nI0217 00:23:42.655483 6471 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:23:42.655505 6471 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 00:23:42.655559 6471 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:24:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.762565 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:08:23.174325812 +0000 UTC Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.767457 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.780079 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e41751a-6cbb-4333-8384-ab48022560f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cb7f78f2c4171a849affbcb15fd06276969fb335a227f536fb43cff251872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c606117277077af4108de0b9bbae3f0333b8109ce1ac898cea87277d56edb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf126db3d482efbecea6828dc760735e023947be7a839fbda4a46382e20ca834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.780989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.781021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.781030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.781044 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 
00:24:10.781053 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:10Z","lastTransitionTime":"2026-02-17T00:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.784668 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.784700 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.784748 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:10 crc kubenswrapper[4805]: E0217 00:24:10.784824 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:10 crc kubenswrapper[4805]: E0217 00:24:10.784890 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:10 crc kubenswrapper[4805]: E0217 00:24:10.784980 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.793590 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.807662 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:10Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.883921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.883955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.883965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.883983 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.883994 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:10Z","lastTransitionTime":"2026-02-17T00:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.986519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.986565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.986581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.986601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:10 crc kubenswrapper[4805]: I0217 00:24:10.986619 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:10Z","lastTransitionTime":"2026-02-17T00:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.089723 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.089767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.089783 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.089804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.089821 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:11Z","lastTransitionTime":"2026-02-17T00:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.192359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.192397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.192405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.192418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.192429 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:11Z","lastTransitionTime":"2026-02-17T00:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.295475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.295547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.295582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.295613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.295638 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:11Z","lastTransitionTime":"2026-02-17T00:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.399050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.399125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.399138 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.399162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.399179 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:11Z","lastTransitionTime":"2026-02-17T00:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.502512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.502571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.502588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.502610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.502628 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:11Z","lastTransitionTime":"2026-02-17T00:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.536866 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/3.log" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.538783 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/2.log" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.543729 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" exitCode=1 Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.543829 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5"} Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.543900 4805 scope.go:117] "RemoveContainer" containerID="487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.546421 4805 scope.go:117] "RemoveContainer" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" Feb 17 00:24:11 crc kubenswrapper[4805]: E0217 00:24:11.546814 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.574488 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.593833 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.605597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.605649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.605665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.605690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.605709 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:11Z","lastTransitionTime":"2026-02-17T00:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.613780 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.630316 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.645055 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.660769 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 
00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.676128 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.697022 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.708800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.708854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.708863 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.708877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.708888 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:11Z","lastTransitionTime":"2026-02-17T00:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.714998 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.729036 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:02Z\\\",\\\"message\\\":\\\"2026-02-17T00:23:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf\\\\n2026-02-17T00:23:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf to /host/opt/cni/bin/\\\\n2026-02-17T00:23:17Z [verbose] multus-daemon started\\\\n2026-02-17T00:23:17Z [verbose] Readiness Indicator file check\\\\n2026-02-17T00:24:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.742737 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.759222 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.762906 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:16:00.541157722 +0000 UTC Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.772486 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.784191 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:11 crc kubenswrapper[4805]: E0217 00:24:11.784358 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.788444 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.811116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.811173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.811191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.811214 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.811233 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:11Z","lastTransitionTime":"2026-02-17T00:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.817186 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://487e24d5ae8e1a62a6c5c65030975176697c65f39a5bc119c901637bbd0b3e92\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:23:42Z\\\",\\\"message\\\":\\\"elds:{GoSet:[]} vips:{GoMap:map[10.217.5.21:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {58a148b3-0a7b-4412-b447-f87788c4883f}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 00:23:42.654599 6471 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655383 6471 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655406 6471 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0217 00:23:42.655415 6471 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0217 00:23:42.655422 6471 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0217 00:23:42.655465 6471 factory.go:656] Stopping watch factory\\\\nI0217 00:23:42.655483 6471 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:23:42.655505 6471 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0217 00:23:42.655559 6471 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:10Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:24:10.736381 6861 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:24:10.736419 6861 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:24:10.736459 6861 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:24:10.736470 6861 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:24:10.736575 6861 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:24:10.736598 6861 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:24:10.736619 6861 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:24:10.736643 6861 factory.go:656] Stopping watch factory\\\\nI0217 00:24:10.736661 6861 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:24:10.736713 6861 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:24:10.736746 6861 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:24:10.736749 6861 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 00:24:10.736731 6861 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:24:10.736756 6861 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:24:10.736826 6861 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 
00:24:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:24:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.835073 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.852957 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e41751a-6cbb-4333-8384-ab48022560f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cb7f78f2c4171a849affbcb15fd06276969fb335a227f536fb43cff251872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c606117277077af4108de0b9bbae3f0333b8109ce1ac898cea87277d56edb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf126db3d482efbecea6828dc760735e023947be7a839fbda4a46382e20ca834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.915088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.915130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.915146 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.915168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.915185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:11Z","lastTransitionTime":"2026-02-17T00:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.954746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.954789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.954806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.954826 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.954841 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:11Z","lastTransitionTime":"2026-02-17T00:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: E0217 00:24:11.975474 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.980207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.980261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.980279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.980302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:11 crc kubenswrapper[4805]: I0217 00:24:11.980319 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:11Z","lastTransitionTime":"2026-02-17T00:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:11 crc kubenswrapper[4805]: E0217 00:24:11.999166 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:11Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.003283 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.003339 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.003349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.003363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.003372 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:12 crc kubenswrapper[4805]: E0217 00:24:12.020805 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.025219 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.025272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.025288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.025310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.025358 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:12 crc kubenswrapper[4805]: E0217 00:24:12.040567 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.045981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.046030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.046132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.046161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.046177 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:12 crc kubenswrapper[4805]: E0217 00:24:12.065745 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: E0217 00:24:12.065959 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.067857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.067907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.067923 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.067944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.067960 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.173433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.173494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.173523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.173549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.173565 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.277719 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.277867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.277929 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.277958 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.277976 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.381398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.381445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.381462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.381485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.381504 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.484740 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.484816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.484842 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.484871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.484892 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.550476 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/3.log" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.555608 4805 scope.go:117] "RemoveContainer" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" Feb 17 00:24:12 crc kubenswrapper[4805]: E0217 00:24:12.555891 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.575001 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.588619 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.588681 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.588698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.588725 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.588743 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.606124 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1f0b2195b9815906abcf08e462e8a61cadba04
207d4fef3f669842164e8af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:10Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:24:10.736381 6861 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:24:10.736419 6861 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:24:10.736459 6861 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:24:10.736470 6861 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:24:10.736575 6861 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:24:10.736598 6861 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:24:10.736619 6861 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:24:10.736643 6861 factory.go:656] Stopping watch factory\\\\nI0217 00:24:10.736661 6861 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:24:10.736713 6861 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:24:10.736746 6861 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:24:10.736749 6861 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 00:24:10.736731 6861 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:24:10.736756 6861 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:24:10.736826 6861 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 00:24:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:24:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.625498 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.644264 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e41751a-6cbb-4333-8384-ab48022560f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cb7f78f2c4171a849affbcb15fd06276969fb335a227f536fb43cff251872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c606117277077af4108de0b9bbae3f0333b8109ce1ac898cea87277d56edb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf126db3d482efbecea6828dc760735e023947be7a839fbda4a46382e20ca834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.664199 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 
2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.683935 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.691648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.691729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.692089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.692512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.692585 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.699150 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.716237 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.737896 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.750731 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 
00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.763958 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:59:43.074806244 +0000 UTC Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.764818 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 
00:24:12.779217 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.784698 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.784717 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.784813 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:12 crc kubenswrapper[4805]: E0217 00:24:12.784974 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:12 crc kubenswrapper[4805]: E0217 00:24:12.785131 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:12 crc kubenswrapper[4805]: E0217 00:24:12.785483 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.793687 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.795581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 
00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.795656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.795679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.795697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.795714 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.809825 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.821680 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.836150 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.849346 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:02Z\\\",\\\"message\\\":\\\"2026-02-17T00:23:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf\\\\n2026-02-17T00:23:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf to /host/opt/cni/bin/\\\\n2026-02-17T00:23:17Z [verbose] multus-daemon started\\\\n2026-02-17T00:23:17Z [verbose] Readiness Indicator file check\\\\n2026-02-17T00:24:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:12Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.899168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.899230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.899246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.899270 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:12 crc kubenswrapper[4805]: I0217 00:24:12.899288 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:12Z","lastTransitionTime":"2026-02-17T00:24:12Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.002119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.002173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.002188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.002210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.002227 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:13Z","lastTransitionTime":"2026-02-17T00:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.105503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.105566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.105624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.105648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.105667 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:13Z","lastTransitionTime":"2026-02-17T00:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.208293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.208800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.209049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.209268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.209772 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:13Z","lastTransitionTime":"2026-02-17T00:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.313352 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.313751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.313934 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.314150 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.314390 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:13Z","lastTransitionTime":"2026-02-17T00:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.417614 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.417706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.417757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.417781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.417797 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:13Z","lastTransitionTime":"2026-02-17T00:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.521744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.521814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.521831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.521854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.521872 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:13Z","lastTransitionTime":"2026-02-17T00:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.626626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.626686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.626709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.626739 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.626762 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:13Z","lastTransitionTime":"2026-02-17T00:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.729589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.729692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.729712 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.729736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.729754 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:13Z","lastTransitionTime":"2026-02-17T00:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.764191 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:22:52.605036244 +0000 UTC Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.783853 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:13 crc kubenswrapper[4805]: E0217 00:24:13.784020 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.832504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.832561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.832582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.832609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.832631 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:13Z","lastTransitionTime":"2026-02-17T00:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.935293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.935373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.935393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.935415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:13 crc kubenswrapper[4805]: I0217 00:24:13.935433 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:13Z","lastTransitionTime":"2026-02-17T00:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.038444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.038490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.038506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.038522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.038534 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:14Z","lastTransitionTime":"2026-02-17T00:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.141866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.141899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.141908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.141922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.141932 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:14Z","lastTransitionTime":"2026-02-17T00:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.244596 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.244653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.244665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.244683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.244695 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:14Z","lastTransitionTime":"2026-02-17T00:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.346768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.346800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.346810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.346823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.346831 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:14Z","lastTransitionTime":"2026-02-17T00:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.449140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.449189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.449208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.449230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.449246 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:14Z","lastTransitionTime":"2026-02-17T00:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.552031 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.552139 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.552159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.552184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.552203 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:14Z","lastTransitionTime":"2026-02-17T00:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.654587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.654621 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.654629 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.654644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:14 crc kubenswrapper[4805]: I0217 00:24:14.654654 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:14Z","lastTransitionTime":"2026-02-17T00:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.757247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.757283 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.757294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.757310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.757321 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:14Z","lastTransitionTime":"2026-02-17T00:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.764443 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:24:39.293762986 +0000 UTC Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.783724 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:15 crc kubenswrapper[4805]: E0217 00:24:14.783864 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.783896 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.783910 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:15 crc kubenswrapper[4805]: E0217 00:24:14.784047 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:15 crc kubenswrapper[4805]: E0217 00:24:14.784114 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.802645 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.815783 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.829376 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.843170 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.857902 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 
00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.859551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.859571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.859581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.859595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.859606 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:14Z","lastTransitionTime":"2026-02-17T00:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.870359 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.882749 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.898843 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:02Z\\\",\\\"message\\\":\\\"2026-02-17T00:23:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf\\\\n2026-02-17T00:23:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf to /host/opt/cni/bin/\\\\n2026-02-17T00:23:17Z [verbose] multus-daemon started\\\\n2026-02-17T00:23:17Z [verbose] Readiness Indicator file check\\\\n2026-02-17T00:24:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.912790 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.929370 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e41751a-6cbb-4333-8384-ab48022560f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cb7f78f2c4171a849affbcb15fd06276969fb335a227f536fb43cff251872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c606117277077af4108de0b9bbae3f0333b8109ce1ac898cea87277d56edb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf126db3d482efbecea6828dc760735e023947be7a839fbda4a46382e20ca834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.946779 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.962021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.962048 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.962057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.962071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.962079 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:14Z","lastTransitionTime":"2026-02-17T00:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.965035 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.977287 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:14.989490 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:14Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.011692 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:10Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:24:10.736381 6861 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:24:10.736419 6861 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:24:10.736459 6861 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:24:10.736470 6861 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:24:10.736575 6861 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:24:10.736598 6861 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:24:10.736619 6861 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:24:10.736643 6861 factory.go:656] Stopping watch factory\\\\nI0217 00:24:10.736661 6861 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:24:10.736713 6861 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:24:10.736746 6861 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:24:10.736749 6861 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 00:24:10.736731 6861 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:24:10.736756 6861 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:24:10.736826 6861 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 00:24:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:24:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:15Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.031706 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:15Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.051878 4805 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:15Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.064893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.064941 4805 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.064957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.064979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.064992 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:15Z","lastTransitionTime":"2026-02-17T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.168359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.168412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.168423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.168440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.168451 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:15Z","lastTransitionTime":"2026-02-17T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.272786 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.272843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.272861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.272883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.272901 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:15Z","lastTransitionTime":"2026-02-17T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.375883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.375948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.375966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.375991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.376009 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:15Z","lastTransitionTime":"2026-02-17T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.478380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.478419 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.478430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.478446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.478457 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:15Z","lastTransitionTime":"2026-02-17T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.580491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.580535 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.580547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.580563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.580575 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:15Z","lastTransitionTime":"2026-02-17T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.683303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.683359 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.683368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.683383 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.683393 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:15Z","lastTransitionTime":"2026-02-17T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.764815 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:42:04.138575734 +0000 UTC Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.784237 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:15 crc kubenswrapper[4805]: E0217 00:24:15.784453 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.785711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.785758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.785777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.785799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.785824 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:15Z","lastTransitionTime":"2026-02-17T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.889090 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.889147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.889167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.889192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.889209 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:15Z","lastTransitionTime":"2026-02-17T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.991879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.991939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.991956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.991980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:15 crc kubenswrapper[4805]: I0217 00:24:15.991997 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:15Z","lastTransitionTime":"2026-02-17T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.094492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.094520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.094528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.094542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.094551 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:16Z","lastTransitionTime":"2026-02-17T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.197630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.197691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.197709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.197734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.197756 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:16Z","lastTransitionTime":"2026-02-17T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.303386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.303503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.303530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.303670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.303741 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:16Z","lastTransitionTime":"2026-02-17T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.406981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.407035 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.407051 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.407073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.407090 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:16Z","lastTransitionTime":"2026-02-17T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.510077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.510130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.510146 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.510168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.510185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:16Z","lastTransitionTime":"2026-02-17T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.612284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.612355 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.612370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.612387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.612400 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:16Z","lastTransitionTime":"2026-02-17T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.714781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.714832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.714848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.714871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.714888 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:16Z","lastTransitionTime":"2026-02-17T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.765675 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:08:02.230689021 +0000 UTC Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.784161 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.784241 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:16 crc kubenswrapper[4805]: E0217 00:24:16.784313 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:16 crc kubenswrapper[4805]: E0217 00:24:16.784435 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.784736 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:16 crc kubenswrapper[4805]: E0217 00:24:16.784980 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.817140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.817179 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.817189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.817204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.817215 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:16Z","lastTransitionTime":"2026-02-17T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.920498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.920548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.920559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.920577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:16 crc kubenswrapper[4805]: I0217 00:24:16.920590 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:16Z","lastTransitionTime":"2026-02-17T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.023963 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.024035 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.024059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.024088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.024115 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:17Z","lastTransitionTime":"2026-02-17T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.127362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.127426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.127445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.127485 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.127502 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:17Z","lastTransitionTime":"2026-02-17T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.230431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.230499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.230517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.230544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.230561 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:17Z","lastTransitionTime":"2026-02-17T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.333831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.333889 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.333914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.333937 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.333955 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:17Z","lastTransitionTime":"2026-02-17T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.436761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.436801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.436815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.436834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.436847 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:17Z","lastTransitionTime":"2026-02-17T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.539595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.539671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.539694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.539722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.539740 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:17Z","lastTransitionTime":"2026-02-17T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.643230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.643406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.643449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.643477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.643497 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:17Z","lastTransitionTime":"2026-02-17T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.747070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.747130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.747153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.747182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.747205 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:17Z","lastTransitionTime":"2026-02-17T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.766268 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:31:56.462777851 +0000 UTC Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.783950 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:17 crc kubenswrapper[4805]: E0217 00:24:17.784430 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.850273 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.850315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.850357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.850391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.850425 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:17Z","lastTransitionTime":"2026-02-17T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.953265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.953312 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.953357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.953379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:17 crc kubenswrapper[4805]: I0217 00:24:17.953396 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:17Z","lastTransitionTime":"2026-02-17T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.056233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.056302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.056320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.056379 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.056397 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:18Z","lastTransitionTime":"2026-02-17T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.159296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.159383 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.159400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.159425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.159440 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:18Z","lastTransitionTime":"2026-02-17T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.262194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.262255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.262282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.262308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.262366 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:18Z","lastTransitionTime":"2026-02-17T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.365317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.365488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.365516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.365547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.365573 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:18Z","lastTransitionTime":"2026-02-17T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.469033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.469087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.469103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.469127 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.469148 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:18Z","lastTransitionTime":"2026-02-17T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.572641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.572685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.572702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.572742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.572788 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:18Z","lastTransitionTime":"2026-02-17T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.675704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.675774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.675807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.675837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.675861 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:18Z","lastTransitionTime":"2026-02-17T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.766406 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:47:34.005625115 +0000 UTC Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.778628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.778679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.778696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.778719 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.778739 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:18Z","lastTransitionTime":"2026-02-17T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.784294 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.784320 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:18 crc kubenswrapper[4805]: E0217 00:24:18.784469 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.784550 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:18 crc kubenswrapper[4805]: E0217 00:24:18.784581 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:18 crc kubenswrapper[4805]: E0217 00:24:18.784754 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.881943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.882001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.882019 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.882042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.882058 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:18Z","lastTransitionTime":"2026-02-17T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.991120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.991199 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.991216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.991235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:18 crc kubenswrapper[4805]: I0217 00:24:18.991247 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:18Z","lastTransitionTime":"2026-02-17T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.094804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.094865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.094883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.094906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.094925 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:19Z","lastTransitionTime":"2026-02-17T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.152781 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.152907 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.152957 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.153001 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.153051 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.153212 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.153303 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:25:23.15328248 +0000 UTC m=+149.169091908 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.153656 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 00:25:23.1536375 +0000 UTC m=+149.169446928 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.153673 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.153771 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.153791 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.153810 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.153833 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 00:25:23.153806235 +0000 UTC m=+149.169615673 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.153909 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 00:25:23.153889467 +0000 UTC m=+149.169698935 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.154001 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.154073 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.154100 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.154245 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 00:25:23.154208727 +0000 UTC m=+149.170018325 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.197348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.197417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.197437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.197462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.197480 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:19Z","lastTransitionTime":"2026-02-17T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.300403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.300454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.300471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.300494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.300513 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:19Z","lastTransitionTime":"2026-02-17T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.403938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.404032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.404056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.404088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.404776 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:19Z","lastTransitionTime":"2026-02-17T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.507674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.507776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.507796 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.507821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.507878 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:19Z","lastTransitionTime":"2026-02-17T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.610821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.610856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.610865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.610878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.610886 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:19Z","lastTransitionTime":"2026-02-17T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.713613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.713663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.713674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.713690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.713702 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:19Z","lastTransitionTime":"2026-02-17T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.766545 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:26:04.801694429 +0000 UTC Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.783962 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:19 crc kubenswrapper[4805]: E0217 00:24:19.784130 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.817218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.817383 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.817407 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.817464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.817487 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:19Z","lastTransitionTime":"2026-02-17T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.921225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.921280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.921295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.921316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:19 crc kubenswrapper[4805]: I0217 00:24:19.921368 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:19Z","lastTransitionTime":"2026-02-17T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.025014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.025089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.025127 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.025161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.025184 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:20Z","lastTransitionTime":"2026-02-17T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.128261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.128378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.128405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.128434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.128457 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:20Z","lastTransitionTime":"2026-02-17T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.231275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.231360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.231384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.231407 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.231424 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:20Z","lastTransitionTime":"2026-02-17T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.334274 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.334389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.334416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.334448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.334473 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:20Z","lastTransitionTime":"2026-02-17T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.436704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.436738 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.436747 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.436760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.436770 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:20Z","lastTransitionTime":"2026-02-17T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.539372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.539434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.539493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.539611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.539638 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:20Z","lastTransitionTime":"2026-02-17T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.641895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.641940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.641951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.641967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.641978 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:20Z","lastTransitionTime":"2026-02-17T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.745791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.745862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.745883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.745913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.745935 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:20Z","lastTransitionTime":"2026-02-17T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.767042 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:16:17.483180329 +0000 UTC Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.783737 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.783823 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:20 crc kubenswrapper[4805]: E0217 00:24:20.783908 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.783830 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:20 crc kubenswrapper[4805]: E0217 00:24:20.784029 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:20 crc kubenswrapper[4805]: E0217 00:24:20.784169 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.849002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.849072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.849089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.849113 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.849132 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:20Z","lastTransitionTime":"2026-02-17T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.952104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.952181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.952206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.952237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:20 crc kubenswrapper[4805]: I0217 00:24:20.952259 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:20Z","lastTransitionTime":"2026-02-17T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.055242 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.055300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.055311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.055355 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.055369 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:21Z","lastTransitionTime":"2026-02-17T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.158728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.158847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.158865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.158889 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.158904 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:21Z","lastTransitionTime":"2026-02-17T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.261465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.261510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.261521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.261536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.261549 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:21Z","lastTransitionTime":"2026-02-17T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.363655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.363686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.363693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.363706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.363714 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:21Z","lastTransitionTime":"2026-02-17T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.470954 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.470996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.471009 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.471025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.471037 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:21Z","lastTransitionTime":"2026-02-17T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.574157 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.574556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.574691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.574850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.574980 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:21Z","lastTransitionTime":"2026-02-17T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.678569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.678662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.678696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.678726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.678749 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:21Z","lastTransitionTime":"2026-02-17T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.767750 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:53:07.066705024 +0000 UTC Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.782152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.782213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.782231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.782259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.782278 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:21Z","lastTransitionTime":"2026-02-17T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.783637 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:21 crc kubenswrapper[4805]: E0217 00:24:21.784024 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.885074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.885441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.885620 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.885769 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.885990 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:21Z","lastTransitionTime":"2026-02-17T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.989055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.989117 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.989135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.989161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:21 crc kubenswrapper[4805]: I0217 00:24:21.989178 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:21Z","lastTransitionTime":"2026-02-17T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.092210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.092260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.092272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.092290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.092302 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.117695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.118032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.118207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.118349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.118468 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: E0217 00:24:22.140453 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.146503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.146597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.146634 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.146667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.146693 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: E0217 00:24:22.164119 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.169444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.169486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.169502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.169522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.169535 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: E0217 00:24:22.185906 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.191755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.191805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.191822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.191843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.191858 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: E0217 00:24:22.208039 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.212669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.212751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.212778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.212810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.212836 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: E0217 00:24:22.234538 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:22Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:22 crc kubenswrapper[4805]: E0217 00:24:22.234815 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.237545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.237588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.237600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.237618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.237631 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.340588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.340638 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.340653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.340676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.340692 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.443760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.443813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.443830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.443853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.443869 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.547451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.547493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.547534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.547550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.547561 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.650916 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.651002 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.651021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.651044 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.651062 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.753874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.753933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.753950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.753975 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.753992 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.768564 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 14:09:06.982659759 +0000 UTC Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.784028 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.784093 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.784047 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:22 crc kubenswrapper[4805]: E0217 00:24:22.784213 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:22 crc kubenswrapper[4805]: E0217 00:24:22.784407 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:22 crc kubenswrapper[4805]: E0217 00:24:22.784531 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.857231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.857280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.857296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.857316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.857408 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.960382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.960455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.960475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.960501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:22 crc kubenswrapper[4805]: I0217 00:24:22.960522 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:22Z","lastTransitionTime":"2026-02-17T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.062985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.063038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.063055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.063079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.063095 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:23Z","lastTransitionTime":"2026-02-17T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.165611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.165647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.165660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.165675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.165688 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:23Z","lastTransitionTime":"2026-02-17T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.268586 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.268622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.268630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.268642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.268650 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:23Z","lastTransitionTime":"2026-02-17T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.371489 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.371552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.371562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.371577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.371587 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:23Z","lastTransitionTime":"2026-02-17T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.474302 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.474377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.474389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.474405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.474415 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:23Z","lastTransitionTime":"2026-02-17T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.577248 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.577292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.577310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.577346 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.577358 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:23Z","lastTransitionTime":"2026-02-17T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.680516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.680595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.680618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.680652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.680674 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:23Z","lastTransitionTime":"2026-02-17T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.769376 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:02:53.450143809 +0000 UTC Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.783550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.783633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.783685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.783692 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:23 crc kubenswrapper[4805]: E0217 00:24:23.783786 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.783708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.783839 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:23Z","lastTransitionTime":"2026-02-17T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.784999 4805 scope.go:117] "RemoveContainer" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" Feb 17 00:24:23 crc kubenswrapper[4805]: E0217 00:24:23.785263 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.886976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.887036 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.887053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.887078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.887096 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:23Z","lastTransitionTime":"2026-02-17T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.989996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.990061 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.990078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.990100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:23 crc kubenswrapper[4805]: I0217 00:24:23.990120 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:23Z","lastTransitionTime":"2026-02-17T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.093535 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.093633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.093654 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.093676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.093693 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:24Z","lastTransitionTime":"2026-02-17T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.197269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.197372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.197391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.197422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.197440 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:24Z","lastTransitionTime":"2026-02-17T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.300589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.300647 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.300664 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.300687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.300705 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:24Z","lastTransitionTime":"2026-02-17T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.404255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.404363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.404388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.404416 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.404439 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:24Z","lastTransitionTime":"2026-02-17T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.507851 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.507915 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.507933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.507957 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.507976 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:24Z","lastTransitionTime":"2026-02-17T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.610955 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.611046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.611064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.611085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.611102 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:24Z","lastTransitionTime":"2026-02-17T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.714530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.714613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.714636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.714667 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.714693 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:24Z","lastTransitionTime":"2026-02-17T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.770574 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:32:21.848635466 +0000 UTC Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.784231 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.784316 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:24 crc kubenswrapper[4805]: E0217 00:24:24.784416 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.784478 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:24 crc kubenswrapper[4805]: E0217 00:24:24.784600 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:24 crc kubenswrapper[4805]: E0217 00:24:24.784645 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.817653 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d9024ef-7937-42b2-8fbc-60db984b9a2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1f0b2195b9815906abcf08e462e8a61cadba04
207d4fef3f669842164e8af5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:10Z\\\",\\\"message\\\":\\\"8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0217 00:24:10.736381 6861 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0217 00:24:10.736419 6861 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0217 00:24:10.736459 6861 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 00:24:10.736470 6861 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 00:24:10.736575 6861 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 00:24:10.736598 6861 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 00:24:10.736619 6861 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0217 00:24:10.736643 6861 factory.go:656] Stopping watch factory\\\\nI0217 00:24:10.736661 6861 ovnkube.go:599] Stopped ovnkube\\\\nI0217 00:24:10.736713 6861 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 00:24:10.736746 6861 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0217 00:24:10.736749 6861 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 00:24:10.736731 6861 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 00:24:10.736756 6861 handler.go:208] Removed *v1.Node event handler 2\\\\nI0217 00:24:10.736826 6861 handler.go:208] Removed *v1.Node event handler 7\\\\nI0217 00:24:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:24:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfgww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tbr6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.818785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.818837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.818857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.818881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.818899 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:24Z","lastTransitionTime":"2026-02-17T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.837389 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edd8b0f3-aa38-48b3-91c8-279765c1f3c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a496b8b19afce1e0e394bfb1b259f3c65d87e9abf99ab9b2b104dd114cb88b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fe700a30f7a8fa5a69a0807852966334fb53c986bd5d4132f57e007c757f78a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d4770ec9854fbcb9bbdef9d70a7ad16c9165c26724840ad00873c059f6e49b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.853697 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e41751a-6cbb-4333-8384-ab48022560f2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35cb7f78f2c4171a849affbcb15fd06276969fb335a227f536fb43cff251872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6c606117277077af4108de0b9bbae3f0333b8109ce1ac898cea87277d56edb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf126db3d482efbecea6828dc760735e023947be7a839fbda4a46382e20ca834\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc21ea6478e2ad150cdbe56d21fb77f355b005dd7411ee47e5ca337bcff08150\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.869419 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abceafb24527d7dc245b59481e53f316bb19ae5a277fc2df9b159467ed9145fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 
2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.882754 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.897205 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54d675475e2176bea51eb9ef208ee8dbe0e3747b9c73d0df805ed32c1f5c198f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fc12f1fd5a75f71a8f41f64d586ee2809aa440e8756898dc7758f267113806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.905992 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-m6rzz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56d5f74f-1f28-476b-9308-e6a93af909eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7c80c6ce3ea28c2b8c84808bad05f5049b00c26f752c485f5cbcd085b4f9c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbc29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-m6rzz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.916402 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:22:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0217 00:23:14.241580 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:23:14.241773 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 00:23:14.242746 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1495713542/tls.crt::/tmp/serving-cert-1495713542/tls.key\\\\\\\"\\\\nI0217 00:23:14.630255 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:23:15.175238 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:23:15.175280 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:23:15.175364 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:23:15.175381 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:23:15.192720 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:23:15.192752 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:23:15.192760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0217 00:23:15.192755 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0217 00:23:15.192767 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:23:15.192799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:23:15.192804 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:23:15.192808 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0217 00:23:15.194984 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:59Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:22:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:22:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:22:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.920603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.920629 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.920637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.920648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.920657 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:24Z","lastTransitionTime":"2026-02-17T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.929677 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03ce26a-37aa-4bc4-8057-f1f9c158868b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91ec02d07cfb616e7c8bf0181ddaf95d90bdde7e4b966fced010d8766bb62ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fef1a7f5202d7741fcb0022732d9fd7fbfbcbbfa60760287e6a16f0422997be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88b04adfd01c81829bfe0544159967e0e905611cd1ccd58e4a5428a8a2e3c812\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28bc4257798bb30cadad842b58c0d6a4eff33ca80c6b6a8afc61e04d686fb72b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://0f575964bd638a8fdb22b8703fff0537d6762e7b80613f90cda6d7eded6a91b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb38c812a731377613c9d50baf69ef30681613926ab52f08d3bb13e80c75c128\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3abc71f6bd6f33aec518aafb501fd06b7566a7766c5230a8f17e8d6b5b76e01e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:23:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-npqgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5lvnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.939159 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jnv59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86b8a270-8cb3-4266-9fe0-3cfd027a9174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6cccv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jnv59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.950557 4805 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.960909 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb16fd729192934329b03757bc62f8b745370d847fa3a733b7e2ac90fe003e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.969509 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2531e0b8-5ad4-4dd3-86b9-bd6dec526041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7dec361b2e72119c1554a4faa65cb3872e093353de731f3a3fb546c9dc74516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzxtj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ckkzk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.978993 4805 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-86xnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dee9dbb9-55c3-4c05-b86a-e889213c20b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43c1dd1445e7ee8725f75ab21d357a0cc8d3e4314e38d0ad5148379f42c9a524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fg8n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-86xnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:24 crc kubenswrapper[4805]: I0217 00:24:24.989088 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57d20f37-b784-4cc1-8f0d-fbfbe640f0e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d908c963bb9fc4136f43c734d16db343c03c7bda3b5053febfcbd21d4661005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf2b1e86d2327e3ff7d3f1999ce970936bb964c7efb3af9dbd74104fabae812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-745dv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jlmnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 
00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.001293 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:24Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.015384 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lk6fw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da6b304-e28f-4666-817f-06bcc107e3fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T00:24:02Z\\\",\\\"message\\\":\\\"2026-02-17T00:23:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf\\\\n2026-02-17T00:23:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8c92300b-8851-47ed-ac69-64b990176eaf to /host/opt/cni/bin/\\\\n2026-02-17T00:23:17Z [verbose] multus-daemon started\\\\n2026-02-17T00:23:17Z [verbose] Readiness Indicator file check\\\\n2026-02-17T00:24:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:23:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxpp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lk6fw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:25Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.023119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.023148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.023163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.023183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.023197 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:25Z","lastTransitionTime":"2026-02-17T00:24:25Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.127947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.128028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.128066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.128098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.128121 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:25Z","lastTransitionTime":"2026-02-17T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.230989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.231057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.231080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.231107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.231129 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:25Z","lastTransitionTime":"2026-02-17T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.334237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.334308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.334365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.334397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.334420 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:25Z","lastTransitionTime":"2026-02-17T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.437701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.437836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.437861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.437927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.437944 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:25Z","lastTransitionTime":"2026-02-17T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.541381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.541437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.541454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.541478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.541495 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:25Z","lastTransitionTime":"2026-02-17T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.644264 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.644354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.644380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.644410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.644431 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:25Z","lastTransitionTime":"2026-02-17T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.747710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.747778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.747804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.747831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.747852 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:25Z","lastTransitionTime":"2026-02-17T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.771172 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 11:24:36.69382991 +0000 UTC Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.784584 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:25 crc kubenswrapper[4805]: E0217 00:24:25.784973 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.799462 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.850107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.850156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.850173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.850198 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.850215 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:25Z","lastTransitionTime":"2026-02-17T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.953375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.953443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.953465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.953496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:25 crc kubenswrapper[4805]: I0217 00:24:25.953518 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:25Z","lastTransitionTime":"2026-02-17T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.056608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.056668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.056691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.056719 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.056740 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:26Z","lastTransitionTime":"2026-02-17T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.158753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.158818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.158840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.158872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.158895 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:26Z","lastTransitionTime":"2026-02-17T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.261766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.261823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.261840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.261864 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.261881 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:26Z","lastTransitionTime":"2026-02-17T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.364751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.364802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.364818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.364840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.364856 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:26Z","lastTransitionTime":"2026-02-17T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.468440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.468484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.468500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.468525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.468542 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:26Z","lastTransitionTime":"2026-02-17T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.571458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.571607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.571636 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.571660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.571685 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:26Z","lastTransitionTime":"2026-02-17T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.674295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.674360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.674371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.674386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.674396 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:26Z","lastTransitionTime":"2026-02-17T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.771826 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:03:38.405113336 +0000 UTC Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.776651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.776685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.776694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.776710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.776719 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:26Z","lastTransitionTime":"2026-02-17T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.784211 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.784243 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:26 crc kubenswrapper[4805]: E0217 00:24:26.784301 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:26 crc kubenswrapper[4805]: E0217 00:24:26.784414 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.784587 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:26 crc kubenswrapper[4805]: E0217 00:24:26.784637 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.878961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.879015 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.879030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.879051 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.879065 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:26Z","lastTransitionTime":"2026-02-17T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.982094 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.982161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.982185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.982215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:26 crc kubenswrapper[4805]: I0217 00:24:26.982238 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:26Z","lastTransitionTime":"2026-02-17T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.085624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.085683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.085701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.085725 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.085756 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:27Z","lastTransitionTime":"2026-02-17T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.189954 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.190004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.190021 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.190047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.190063 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:27Z","lastTransitionTime":"2026-02-17T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.292944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.292993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.293017 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.293044 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.293063 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:27Z","lastTransitionTime":"2026-02-17T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.396183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.396231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.396247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.396268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.396286 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:27Z","lastTransitionTime":"2026-02-17T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.499781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.499828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.499844 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.499865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.499882 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:27Z","lastTransitionTime":"2026-02-17T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.602899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.602977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.603000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.603029 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.603050 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:27Z","lastTransitionTime":"2026-02-17T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.705461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.705531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.705549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.705572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.705591 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:27Z","lastTransitionTime":"2026-02-17T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.773290 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:42:05.881991869 +0000 UTC Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.783839 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:27 crc kubenswrapper[4805]: E0217 00:24:27.784008 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.808706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.808767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.808784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.808804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.808821 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:27Z","lastTransitionTime":"2026-02-17T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.911473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.911579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.911638 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.911663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:27 crc kubenswrapper[4805]: I0217 00:24:27.911679 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:27Z","lastTransitionTime":"2026-02-17T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.014249 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.014316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.014362 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.014410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.014429 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:28Z","lastTransitionTime":"2026-02-17T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.117356 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.117404 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.117443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.117471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.117493 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:28Z","lastTransitionTime":"2026-02-17T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.220258 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.220299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.220315 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.220348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.220359 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:28Z","lastTransitionTime":"2026-02-17T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.323394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.323465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.323503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.323597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.323803 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:28Z","lastTransitionTime":"2026-02-17T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.427104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.427196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.427213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.427237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.427253 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:28Z","lastTransitionTime":"2026-02-17T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.529606 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.529664 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.529687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.529712 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.529731 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:28Z","lastTransitionTime":"2026-02-17T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.632671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.632741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.632763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.632792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.632813 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:28Z","lastTransitionTime":"2026-02-17T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.735793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.735867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.735903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.735933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.735957 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:28Z","lastTransitionTime":"2026-02-17T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.774424 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:01:58.360599215 +0000 UTC Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.783820 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.783918 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.783836 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:28 crc kubenswrapper[4805]: E0217 00:24:28.784052 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:28 crc kubenswrapper[4805]: E0217 00:24:28.784131 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:28 crc kubenswrapper[4805]: E0217 00:24:28.784246 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.838817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.838883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.838906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.838936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.838960 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:28Z","lastTransitionTime":"2026-02-17T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.941633 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.941685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.941698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.941718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:28 crc kubenswrapper[4805]: I0217 00:24:28.941732 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:28Z","lastTransitionTime":"2026-02-17T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.045018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.045064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.045073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.045088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.045098 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:29Z","lastTransitionTime":"2026-02-17T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.148091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.148144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.148161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.148182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.148198 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:29Z","lastTransitionTime":"2026-02-17T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.251204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.251617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.251673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.251709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.251734 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:29Z","lastTransitionTime":"2026-02-17T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.354777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.354840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.354857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.354882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.354899 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:29Z","lastTransitionTime":"2026-02-17T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.457890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.457967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.457993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.458026 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.458048 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:29Z","lastTransitionTime":"2026-02-17T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.559871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.559925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.559938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.559954 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.559965 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:29Z","lastTransitionTime":"2026-02-17T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.662951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.663010 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.663028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.663054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.663073 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:29Z","lastTransitionTime":"2026-02-17T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.771990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.772030 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.772042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.772057 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.772068 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:29Z","lastTransitionTime":"2026-02-17T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.774537 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:25:34.536583358 +0000 UTC Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.783803 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:29 crc kubenswrapper[4805]: E0217 00:24:29.783948 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.874965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.875032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.875049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.875080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.875097 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:29Z","lastTransitionTime":"2026-02-17T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.978570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.978632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.978651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.978675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:29 crc kubenswrapper[4805]: I0217 00:24:29.978693 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:29Z","lastTransitionTime":"2026-02-17T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.081363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.081423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.081445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.081476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.081497 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:30Z","lastTransitionTime":"2026-02-17T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.184892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.184971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.184994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.185022 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.185042 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:30Z","lastTransitionTime":"2026-02-17T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.287607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.287671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.287729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.287758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.287779 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:30Z","lastTransitionTime":"2026-02-17T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.390865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.390965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.390991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.391018 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.391039 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:30Z","lastTransitionTime":"2026-02-17T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.493815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.493877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.493895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.493917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.493933 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:30Z","lastTransitionTime":"2026-02-17T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.597103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.597167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.597185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.597208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.597225 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:30Z","lastTransitionTime":"2026-02-17T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.700477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.700589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.700609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.700661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.700682 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:30Z","lastTransitionTime":"2026-02-17T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.775594 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:24:07.822965441 +0000 UTC Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.784064 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.784177 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:30 crc kubenswrapper[4805]: E0217 00:24:30.784255 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:30 crc kubenswrapper[4805]: E0217 00:24:30.784384 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.784423 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:30 crc kubenswrapper[4805]: E0217 00:24:30.784595 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.803282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.803395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.803413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.803435 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.803454 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:30Z","lastTransitionTime":"2026-02-17T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.906574 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.906665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.906681 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.906709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:30 crc kubenswrapper[4805]: I0217 00:24:30.906727 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:30Z","lastTransitionTime":"2026-02-17T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.009857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.009925 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.009942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.009969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.009988 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:31Z","lastTransitionTime":"2026-02-17T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.112072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.112114 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.112124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.112139 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.112149 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:31Z","lastTransitionTime":"2026-02-17T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.214909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.214971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.214980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.214994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.215002 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:31Z","lastTransitionTime":"2026-02-17T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.317071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.317164 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.317183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.317206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.317225 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:31Z","lastTransitionTime":"2026-02-17T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.420227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.420295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.420311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.420365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.420384 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:31Z","lastTransitionTime":"2026-02-17T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.523385 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.523443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.523460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.523483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.523500 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:31Z","lastTransitionTime":"2026-02-17T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.626666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.626741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.626764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.626787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.626804 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:31Z","lastTransitionTime":"2026-02-17T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.729928 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.729987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.730007 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.730034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.730051 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:31Z","lastTransitionTime":"2026-02-17T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.776712 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:15:07.963793037 +0000 UTC Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.784039 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:31 crc kubenswrapper[4805]: E0217 00:24:31.784216 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.833581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.833635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.833652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.833674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.833691 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:31Z","lastTransitionTime":"2026-02-17T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.938106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.938194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.938211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.938234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:31 crc kubenswrapper[4805]: I0217 00:24:31.938276 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:31Z","lastTransitionTime":"2026-02-17T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.041294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.041374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.041386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.041402 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.041429 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.143939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.144035 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.144054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.144118 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.144148 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.246890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.246990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.247014 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.247080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.247111 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.313250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.313310 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.313363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.313392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.313413 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: E0217 00:24:32.335941 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.340890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.340939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.340956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.340978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.340995 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: E0217 00:24:32.358125 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.362077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.362132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.362149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.362172 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.362189 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: E0217 00:24:32.379923 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.385093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.385152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.385168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.385189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.385207 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: E0217 00:24:32.403415 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.409235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.409284 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.409296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.409313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.409345 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: E0217 00:24:32.426890 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:24:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0a3d29de-c011-49cf-a4c7-02d3c97ac2d5\\\",\\\"systemUUID\\\":\\\"c46f5e1f-50b9-4331-9140-c12e3ad03920\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T00:24:32Z is after 2025-08-24T17:21:41Z" Feb 17 00:24:32 crc kubenswrapper[4805]: E0217 00:24:32.427183 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.429564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.429632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.429652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.429686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.429711 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.533097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.534375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.534425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.534444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.534456 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.637622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.637669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.637687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.637708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.637724 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.741116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.741180 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.741239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.741301 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.741355 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.777393 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:30:43.961354699 +0000 UTC Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.783704 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.783791 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:32 crc kubenswrapper[4805]: E0217 00:24:32.783911 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:32 crc kubenswrapper[4805]: E0217 00:24:32.784004 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.784079 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:32 crc kubenswrapper[4805]: E0217 00:24:32.784163 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.843967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.844040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.844065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.844095 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.844117 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.948197 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.948277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.948303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.948367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:32 crc kubenswrapper[4805]: I0217 00:24:32.948392 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:32Z","lastTransitionTime":"2026-02-17T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.051777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.051899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.051921 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.051945 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.051961 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:33Z","lastTransitionTime":"2026-02-17T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.155214 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.155275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.155293 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.155314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.155359 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:33Z","lastTransitionTime":"2026-02-17T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.258457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.258511 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.258532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.258559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.258582 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:33Z","lastTransitionTime":"2026-02-17T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.362008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.362067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.362085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.362110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.362127 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:33Z","lastTransitionTime":"2026-02-17T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.464857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.464902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.464913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.464930 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.464942 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:33Z","lastTransitionTime":"2026-02-17T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.568593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.568661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.568689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.568718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.568739 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:33Z","lastTransitionTime":"2026-02-17T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.671477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.671555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.671581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.671611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.671634 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:33Z","lastTransitionTime":"2026-02-17T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.715719 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:33 crc kubenswrapper[4805]: E0217 00:24:33.716007 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:24:33 crc kubenswrapper[4805]: E0217 00:24:33.716155 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs podName:86b8a270-8cb3-4266-9fe0-3cfd027a9174 nodeName:}" failed. No retries permitted until 2026-02-17 00:25:37.716132804 +0000 UTC m=+163.731942242 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs") pod "network-metrics-daemon-jnv59" (UID: "86b8a270-8cb3-4266-9fe0-3cfd027a9174") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.775155 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.775218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.775234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.775257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.775274 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:33Z","lastTransitionTime":"2026-02-17T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.778563 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:46:40.681833203 +0000 UTC Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.783969 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:33 crc kubenswrapper[4805]: E0217 00:24:33.784436 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.878974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.879033 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.879049 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.879072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.879093 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:33Z","lastTransitionTime":"2026-02-17T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.982369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.982430 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.982447 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.982473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:33 crc kubenswrapper[4805]: I0217 00:24:33.982490 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:33Z","lastTransitionTime":"2026-02-17T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.089827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.090765 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.090836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.090871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.090889 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:34Z","lastTransitionTime":"2026-02-17T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.194660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.194748 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.194774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.194809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.194833 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:34Z","lastTransitionTime":"2026-02-17T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.297736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.297799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.297821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.297853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.297876 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:34Z","lastTransitionTime":"2026-02-17T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.400987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.401045 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.401065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.401092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.401115 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:34Z","lastTransitionTime":"2026-02-17T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.504626 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.504689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.504707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.504732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.504750 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:34Z","lastTransitionTime":"2026-02-17T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.607678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.607740 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.607757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.607781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.607802 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:34Z","lastTransitionTime":"2026-02-17T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.712102 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.712166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.712183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.712206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.712223 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:34Z","lastTransitionTime":"2026-02-17T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.779151 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:43:14.936764982 +0000 UTC Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.783919 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.784018 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.784028 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:34 crc kubenswrapper[4805]: E0217 00:24:34.784153 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:34 crc kubenswrapper[4805]: E0217 00:24:34.784306 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:34 crc kubenswrapper[4805]: E0217 00:24:34.784708 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.815832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.815883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.815900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.815967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.815989 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:34Z","lastTransitionTime":"2026-02-17T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.821407 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.821376808 podStartE2EDuration="1m19.821376808s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:34.821262345 +0000 UTC m=+100.837071793" watchObservedRunningTime="2026-02-17 00:24:34.821376808 +0000 UTC m=+100.837186246" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.871531 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5lvnd" podStartSLOduration=79.871501236 podStartE2EDuration="1m19.871501236s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:34.853506336 +0000 UTC m=+100.869315824" watchObservedRunningTime="2026-02-17 00:24:34.871501236 +0000 UTC m=+100.887310644" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.885594 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=9.885567542 podStartE2EDuration="9.885567542s" podCreationTimestamp="2026-02-17 00:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:34.872062312 +0000 UTC m=+100.887871720" watchObservedRunningTime="2026-02-17 00:24:34.885567542 +0000 UTC m=+100.901376970" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.905578 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podStartSLOduration=79.90556138 podStartE2EDuration="1m19.90556138s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:34.905507078 +0000 UTC m=+100.921316496" watchObservedRunningTime="2026-02-17 00:24:34.90556138 +0000 UTC m=+100.921370778" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.914129 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-86xnz" podStartSLOduration=79.914108807 podStartE2EDuration="1m19.914108807s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:34.913838559 +0000 UTC m=+100.929647977" watchObservedRunningTime="2026-02-17 00:24:34.914108807 +0000 UTC m=+100.929918215" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.917849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.917887 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.917899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.917916 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.917927 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:34Z","lastTransitionTime":"2026-02-17T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.924707 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jlmnt" podStartSLOduration=79.924688532 podStartE2EDuration="1m19.924688532s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:34.92426832 +0000 UTC m=+100.940077728" watchObservedRunningTime="2026-02-17 00:24:34.924688532 +0000 UTC m=+100.940497940" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.953917 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lk6fw" podStartSLOduration=79.953897306 podStartE2EDuration="1m19.953897306s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:34.953665199 +0000 UTC m=+100.969474607" watchObservedRunningTime="2026-02-17 00:24:34.953897306 +0000 UTC m=+100.969706704" Feb 17 00:24:34 crc kubenswrapper[4805]: I0217 00:24:34.976503 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=47.976478768 podStartE2EDuration="47.976478768s" podCreationTimestamp="2026-02-17 00:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:34.975930662 +0000 UTC m=+100.991740060" watchObservedRunningTime="2026-02-17 00:24:34.976478768 +0000 UTC m=+100.992288186" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.020566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.020609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.020623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.020640 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.020653 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:35Z","lastTransitionTime":"2026-02-17T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.024536 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-m6rzz" podStartSLOduration=80.024507985 podStartE2EDuration="1m20.024507985s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:35.024262968 +0000 UTC m=+101.040072386" watchObservedRunningTime="2026-02-17 00:24:35.024507985 +0000 UTC m=+101.040317423" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.123486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.123533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.123544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.123563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.123576 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:35Z","lastTransitionTime":"2026-02-17T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.225787 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.225847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.225870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.225897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.225919 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:35Z","lastTransitionTime":"2026-02-17T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.329046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.329125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.329153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.329181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.329199 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:35Z","lastTransitionTime":"2026-02-17T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.432175 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.432241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.432261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.432287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.432304 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:35Z","lastTransitionTime":"2026-02-17T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.535036 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.535109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.535131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.535161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.535185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:35Z","lastTransitionTime":"2026-02-17T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.638209 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.638267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.638289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.638318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.638373 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:35Z","lastTransitionTime":"2026-02-17T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.741004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.741064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.741080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.741104 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.741121 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:35Z","lastTransitionTime":"2026-02-17T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.780081 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:36:33.406994533 +0000 UTC Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.784539 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:35 crc kubenswrapper[4805]: E0217 00:24:35.785012 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.804470 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=77.804441992 podStartE2EDuration="1m17.804441992s" podCreationTimestamp="2026-02-17 00:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:35.070420002 +0000 UTC m=+101.086229410" watchObservedRunningTime="2026-02-17 00:24:35.804441992 +0000 UTC m=+101.820251430" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.805826 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.851642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.851688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.851706 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.851730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.851747 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:35Z","lastTransitionTime":"2026-02-17T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.955709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.955759 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.955770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.955788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:35 crc kubenswrapper[4805]: I0217 00:24:35.955801 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:35Z","lastTransitionTime":"2026-02-17T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.058549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.058603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.058612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.058628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.058636 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:36Z","lastTransitionTime":"2026-02-17T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.161699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.161774 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.161790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.161817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.161835 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:36Z","lastTransitionTime":"2026-02-17T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.264841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.264911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.264928 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.264952 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.264970 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:36Z","lastTransitionTime":"2026-02-17T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.367436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.367499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.367520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.367545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.367562 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:36Z","lastTransitionTime":"2026-02-17T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.470843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.470894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.470914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.470942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.470965 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:36Z","lastTransitionTime":"2026-02-17T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.574260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.574367 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.574386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.574409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.574428 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:36Z","lastTransitionTime":"2026-02-17T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.677496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.677561 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.677582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.677611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.677633 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:36Z","lastTransitionTime":"2026-02-17T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.780430 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 00:46:50.165375588 +0000 UTC Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.780860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.780905 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.780920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.780942 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.780958 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:36Z","lastTransitionTime":"2026-02-17T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.784604 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.784651 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:36 crc kubenswrapper[4805]: E0217 00:24:36.784760 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.784915 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:36 crc kubenswrapper[4805]: E0217 00:24:36.785095 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:36 crc kubenswrapper[4805]: E0217 00:24:36.785179 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.883308 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.883420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.883445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.883475 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.883499 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:36Z","lastTransitionTime":"2026-02-17T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.986221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.986287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.986307 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.986377 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:36 crc kubenswrapper[4805]: I0217 00:24:36.986436 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:36Z","lastTransitionTime":"2026-02-17T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.090431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.090517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.090534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.090556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.090573 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:37Z","lastTransitionTime":"2026-02-17T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.194551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.194614 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.194632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.194658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.194678 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:37Z","lastTransitionTime":"2026-02-17T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.297103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.297163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.297204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.297228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.297244 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:37Z","lastTransitionTime":"2026-02-17T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.400255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.400370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.400400 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.400431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.400454 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:37Z","lastTransitionTime":"2026-02-17T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.502910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.502969 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.502985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.503008 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.503026 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:37Z","lastTransitionTime":"2026-02-17T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.605681 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.605780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.605811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.605841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.605861 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:37Z","lastTransitionTime":"2026-02-17T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.709522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.709641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.709665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.709696 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.709718 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:37Z","lastTransitionTime":"2026-02-17T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.781707 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:59:22.621162 +0000 UTC Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.784199 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:37 crc kubenswrapper[4805]: E0217 00:24:37.784473 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.785548 4805 scope.go:117] "RemoveContainer" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" Feb 17 00:24:37 crc kubenswrapper[4805]: E0217 00:24:37.785809 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tbr6r_openshift-ovn-kubernetes(8d9024ef-7937-42b2-8fbc-60db984b9a2f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.812804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.812867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.812884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.812905 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.812923 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:37Z","lastTransitionTime":"2026-02-17T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.915927 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.915984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.916000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.916023 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:37 crc kubenswrapper[4805]: I0217 00:24:37.916040 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:37Z","lastTransitionTime":"2026-02-17T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.018810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.018867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.018887 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.018913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.018932 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:38Z","lastTransitionTime":"2026-02-17T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.121949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.121999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.122016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.122039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.122056 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:38Z","lastTransitionTime":"2026-02-17T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.225375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.225441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.225464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.225488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.225504 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:38Z","lastTransitionTime":"2026-02-17T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.328076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.328142 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.328161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.328222 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.328244 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:38Z","lastTransitionTime":"2026-02-17T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.430839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.430902 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.430917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.430936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.430951 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:38Z","lastTransitionTime":"2026-02-17T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.533665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.533717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.533734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.533757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.533774 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:38Z","lastTransitionTime":"2026-02-17T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.637638 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.637692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.637709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.637730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.637746 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:38Z","lastTransitionTime":"2026-02-17T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.740784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.740850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.740872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.740903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.740923 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:38Z","lastTransitionTime":"2026-02-17T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.782771 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 20:52:27.700801764 +0000 UTC Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.784123 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.784146 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.784358 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:38 crc kubenswrapper[4805]: E0217 00:24:38.784505 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:38 crc kubenswrapper[4805]: E0217 00:24:38.784598 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:38 crc kubenswrapper[4805]: E0217 00:24:38.784711 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.844913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.844965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.844980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.844999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.845012 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:38Z","lastTransitionTime":"2026-02-17T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.948405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.948468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.948489 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.948520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:38 crc kubenswrapper[4805]: I0217 00:24:38.948539 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:38Z","lastTransitionTime":"2026-02-17T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.052012 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.052058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.052067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.052083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.052093 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:39Z","lastTransitionTime":"2026-02-17T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.154847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.154915 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.154935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.154959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.154973 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:39Z","lastTransitionTime":"2026-02-17T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.258545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.258592 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.258604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.258625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.258642 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:39Z","lastTransitionTime":"2026-02-17T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.361428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.361478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.361491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.361508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.361523 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:39Z","lastTransitionTime":"2026-02-17T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.464103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.464173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.464195 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.464223 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.464244 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:39Z","lastTransitionTime":"2026-02-17T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.567086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.567153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.567170 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.567196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.567215 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:39Z","lastTransitionTime":"2026-02-17T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.670003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.670040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.670054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.670076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.670091 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:39Z","lastTransitionTime":"2026-02-17T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.773839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.773926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.773954 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.773983 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.774005 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:39Z","lastTransitionTime":"2026-02-17T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.783355 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:29:29.999650682 +0000 UTC Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.783642 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:39 crc kubenswrapper[4805]: E0217 00:24:39.783830 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.877260 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.877373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.877399 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.877425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.877443 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:39Z","lastTransitionTime":"2026-02-17T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.980798 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.981135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.981352 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.981534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:39 crc kubenswrapper[4805]: I0217 00:24:39.981684 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:39Z","lastTransitionTime":"2026-02-17T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.085812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.086243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.086749 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.086966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.087174 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:40Z","lastTransitionTime":"2026-02-17T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.190584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.190974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.191130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.191281 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.191549 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:40Z","lastTransitionTime":"2026-02-17T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.294693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.294762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.294785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.294813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.294834 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:40Z","lastTransitionTime":"2026-02-17T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.398174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.398248 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.398271 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.398303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.398357 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:40Z","lastTransitionTime":"2026-02-17T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.501962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.502020 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.502043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.502069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.502089 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:40Z","lastTransitionTime":"2026-02-17T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.604516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.604588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.604610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.604637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.604658 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:40Z","lastTransitionTime":"2026-02-17T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.707025 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.707084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.707100 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.707121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.707136 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:40Z","lastTransitionTime":"2026-02-17T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.784077 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:38:55.656479599 +0000 UTC Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.784475 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:40 crc kubenswrapper[4805]: E0217 00:24:40.784642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.784964 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:40 crc kubenswrapper[4805]: E0217 00:24:40.785091 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.785422 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:40 crc kubenswrapper[4805]: E0217 00:24:40.785624 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.812509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.812575 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.812596 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.812617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.812632 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:40Z","lastTransitionTime":"2026-02-17T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.914870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.914935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.914953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.914977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:40 crc kubenswrapper[4805]: I0217 00:24:40.914995 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:40Z","lastTransitionTime":"2026-02-17T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.017560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.017632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.017656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.017688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.017708 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:41Z","lastTransitionTime":"2026-02-17T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.120604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.120662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.120680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.120701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.120720 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:41Z","lastTransitionTime":"2026-02-17T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.223784 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.223852 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.223907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.223938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.223963 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:41Z","lastTransitionTime":"2026-02-17T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.330271 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.330373 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.330410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.330440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.330465 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:41Z","lastTransitionTime":"2026-02-17T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.433276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.433388 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.433409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.433433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.433451 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:41Z","lastTransitionTime":"2026-02-17T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.536809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.536867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.536883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.536906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.536924 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:41Z","lastTransitionTime":"2026-02-17T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.640004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.640122 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.640141 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.640215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.640283 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:41Z","lastTransitionTime":"2026-02-17T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.743964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.744048 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.744073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.744107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.744130 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:41Z","lastTransitionTime":"2026-02-17T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.784391 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:08:06.954760712 +0000 UTC Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.784591 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:41 crc kubenswrapper[4805]: E0217 00:24:41.784735 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.847690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.847753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.848016 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.848081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.848107 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:41Z","lastTransitionTime":"2026-02-17T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.951181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.951263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.951288 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.951318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:41 crc kubenswrapper[4805]: I0217 00:24:41.951374 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:41Z","lastTransitionTime":"2026-02-17T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.054439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.054502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.054520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.054547 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.054567 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:42Z","lastTransitionTime":"2026-02-17T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.158080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.158135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.158152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.158174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.158190 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:42Z","lastTransitionTime":"2026-02-17T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.261993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.262116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.262143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.262171 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.262193 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:42Z","lastTransitionTime":"2026-02-17T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.365980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.366054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.366074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.366115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.366138 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:42Z","lastTransitionTime":"2026-02-17T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.469185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.469247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.469264 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.469289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.469306 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:42Z","lastTransitionTime":"2026-02-17T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.572303 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.572393 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.572411 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.572436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.572454 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:42Z","lastTransitionTime":"2026-02-17T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.616420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.616514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.616541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.616608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.616633 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:24:42Z","lastTransitionTime":"2026-02-17T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.671568 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p"] Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.672473 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.674952 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.674992 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.675294 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.675446 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.711775 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=7.711755535 podStartE2EDuration="7.711755535s" podCreationTimestamp="2026-02-17 00:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:42.711086236 +0000 UTC m=+108.726895654" watchObservedRunningTime="2026-02-17 00:24:42.711755535 +0000 UTC m=+108.727564933" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.783758 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.783842 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:42 crc kubenswrapper[4805]: E0217 00:24:42.783883 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:42 crc kubenswrapper[4805]: E0217 00:24:42.784019 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.784082 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:42 crc kubenswrapper[4805]: E0217 00:24:42.784243 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.784617 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 08:43:42.687121408 +0000 UTC Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.784662 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.794142 4805 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.812837 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3f6d4933-5f51-42d4-bc21-fc75e8e76714-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.812919 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f6d4933-5f51-42d4-bc21-fc75e8e76714-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.812945 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3f6d4933-5f51-42d4-bc21-fc75e8e76714-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.812968 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f6d4933-5f51-42d4-bc21-fc75e8e76714-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.813059 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f6d4933-5f51-42d4-bc21-fc75e8e76714-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.913914 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f6d4933-5f51-42d4-bc21-fc75e8e76714-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.914643 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3f6d4933-5f51-42d4-bc21-fc75e8e76714-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.914729 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f6d4933-5f51-42d4-bc21-fc75e8e76714-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.914819 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f6d4933-5f51-42d4-bc21-fc75e8e76714-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.914883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3f6d4933-5f51-42d4-bc21-fc75e8e76714-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.914973 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3f6d4933-5f51-42d4-bc21-fc75e8e76714-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: 
\"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.915020 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3f6d4933-5f51-42d4-bc21-fc75e8e76714-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.917053 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3f6d4933-5f51-42d4-bc21-fc75e8e76714-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.927571 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f6d4933-5f51-42d4-bc21-fc75e8e76714-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.944105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f6d4933-5f51-42d4-bc21-fc75e8e76714-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gjl9p\" (UID: \"3f6d4933-5f51-42d4-bc21-fc75e8e76714\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:42 crc kubenswrapper[4805]: I0217 00:24:42.988261 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" Feb 17 00:24:43 crc kubenswrapper[4805]: W0217 00:24:43.003966 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f6d4933_5f51_42d4_bc21_fc75e8e76714.slice/crio-ee11e0df99ca9db33869286e9b8135808d7a43ed107b5ab8786f657fac2b6b91 WatchSource:0}: Error finding container ee11e0df99ca9db33869286e9b8135808d7a43ed107b5ab8786f657fac2b6b91: Status 404 returned error can't find the container with id ee11e0df99ca9db33869286e9b8135808d7a43ed107b5ab8786f657fac2b6b91 Feb 17 00:24:43 crc kubenswrapper[4805]: I0217 00:24:43.688212 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" event={"ID":"3f6d4933-5f51-42d4-bc21-fc75e8e76714","Type":"ContainerStarted","Data":"1e60d6b00bf573a9edf998f24ecce6a6418f362d350c074cb090ae10b1fbcc1f"} Feb 17 00:24:43 crc kubenswrapper[4805]: I0217 00:24:43.688582 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" event={"ID":"3f6d4933-5f51-42d4-bc21-fc75e8e76714","Type":"ContainerStarted","Data":"ee11e0df99ca9db33869286e9b8135808d7a43ed107b5ab8786f657fac2b6b91"} Feb 17 00:24:43 crc kubenswrapper[4805]: I0217 00:24:43.710361 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gjl9p" podStartSLOduration=88.710296147 podStartE2EDuration="1m28.710296147s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:43.708764063 +0000 UTC m=+109.724573501" watchObservedRunningTime="2026-02-17 00:24:43.710296147 +0000 UTC m=+109.726105585" Feb 17 00:24:43 crc kubenswrapper[4805]: I0217 00:24:43.784424 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:43 crc kubenswrapper[4805]: E0217 00:24:43.784630 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:44 crc kubenswrapper[4805]: I0217 00:24:44.783895 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:44 crc kubenswrapper[4805]: I0217 00:24:44.784027 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:44 crc kubenswrapper[4805]: I0217 00:24:44.784063 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:44 crc kubenswrapper[4805]: E0217 00:24:44.785945 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:44 crc kubenswrapper[4805]: E0217 00:24:44.786055 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:44 crc kubenswrapper[4805]: E0217 00:24:44.786162 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:45 crc kubenswrapper[4805]: I0217 00:24:45.784047 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:45 crc kubenswrapper[4805]: E0217 00:24:45.784624 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:46 crc kubenswrapper[4805]: I0217 00:24:46.784452 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:46 crc kubenswrapper[4805]: I0217 00:24:46.784500 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:46 crc kubenswrapper[4805]: I0217 00:24:46.784500 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:46 crc kubenswrapper[4805]: E0217 00:24:46.784690 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:46 crc kubenswrapper[4805]: E0217 00:24:46.784798 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:46 crc kubenswrapper[4805]: E0217 00:24:46.785019 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:47 crc kubenswrapper[4805]: I0217 00:24:47.784500 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:47 crc kubenswrapper[4805]: E0217 00:24:47.784642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:48 crc kubenswrapper[4805]: I0217 00:24:48.783826 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:48 crc kubenswrapper[4805]: I0217 00:24:48.783895 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:48 crc kubenswrapper[4805]: I0217 00:24:48.783839 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:48 crc kubenswrapper[4805]: E0217 00:24:48.784056 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:48 crc kubenswrapper[4805]: E0217 00:24:48.784167 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:48 crc kubenswrapper[4805]: E0217 00:24:48.784309 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:49 crc kubenswrapper[4805]: I0217 00:24:49.709863 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lk6fw_5da6b304-e28f-4666-817f-06bcc107e3fe/kube-multus/1.log" Feb 17 00:24:49 crc kubenswrapper[4805]: I0217 00:24:49.710673 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lk6fw_5da6b304-e28f-4666-817f-06bcc107e3fe/kube-multus/0.log" Feb 17 00:24:49 crc kubenswrapper[4805]: I0217 00:24:49.710748 4805 generic.go:334] "Generic (PLEG): container finished" podID="5da6b304-e28f-4666-817f-06bcc107e3fe" containerID="dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f" exitCode=1 Feb 17 00:24:49 crc kubenswrapper[4805]: I0217 00:24:49.710793 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lk6fw" event={"ID":"5da6b304-e28f-4666-817f-06bcc107e3fe","Type":"ContainerDied","Data":"dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f"} Feb 17 00:24:49 crc kubenswrapper[4805]: I0217 00:24:49.710840 4805 scope.go:117] "RemoveContainer" containerID="5fc97c3e7e2c23e4520670ced3393ee3c1d74e35bf8b9d10bd1277faf9d2867d" Feb 17 00:24:49 crc kubenswrapper[4805]: I0217 00:24:49.711458 4805 scope.go:117] "RemoveContainer" containerID="dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f" Feb 17 00:24:49 crc kubenswrapper[4805]: E0217 00:24:49.711776 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lk6fw_openshift-multus(5da6b304-e28f-4666-817f-06bcc107e3fe)\"" pod="openshift-multus/multus-lk6fw" podUID="5da6b304-e28f-4666-817f-06bcc107e3fe" Feb 17 00:24:49 crc kubenswrapper[4805]: I0217 00:24:49.784161 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:49 crc kubenswrapper[4805]: E0217 00:24:49.784505 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:50 crc kubenswrapper[4805]: I0217 00:24:50.717144 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lk6fw_5da6b304-e28f-4666-817f-06bcc107e3fe/kube-multus/1.log" Feb 17 00:24:50 crc kubenswrapper[4805]: I0217 00:24:50.784035 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:50 crc kubenswrapper[4805]: I0217 00:24:50.784140 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:50 crc kubenswrapper[4805]: E0217 00:24:50.784211 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:50 crc kubenswrapper[4805]: I0217 00:24:50.784236 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:50 crc kubenswrapper[4805]: E0217 00:24:50.784381 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:50 crc kubenswrapper[4805]: E0217 00:24:50.784472 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:50 crc kubenswrapper[4805]: I0217 00:24:50.786134 4805 scope.go:117] "RemoveContainer" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" Feb 17 00:24:51 crc kubenswrapper[4805]: I0217 00:24:51.723022 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/3.log" Feb 17 00:24:51 crc kubenswrapper[4805]: I0217 00:24:51.727026 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerStarted","Data":"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa"} Feb 17 00:24:51 crc kubenswrapper[4805]: I0217 00:24:51.727628 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:24:51 crc kubenswrapper[4805]: I0217 00:24:51.777754 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podStartSLOduration=96.777731278 podStartE2EDuration="1m36.777731278s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:51.776956565 +0000 UTC m=+117.792765973" watchObservedRunningTime="2026-02-17 00:24:51.777731278 +0000 UTC m=+117.793540716" Feb 17 00:24:51 crc kubenswrapper[4805]: I0217 00:24:51.784189 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:51 crc kubenswrapper[4805]: E0217 00:24:51.784688 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:51 crc kubenswrapper[4805]: I0217 00:24:51.873253 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jnv59"] Feb 17 00:24:52 crc kubenswrapper[4805]: I0217 00:24:52.730888 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:52 crc kubenswrapper[4805]: E0217 00:24:52.731120 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:52 crc kubenswrapper[4805]: I0217 00:24:52.783904 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:52 crc kubenswrapper[4805]: I0217 00:24:52.784059 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:52 crc kubenswrapper[4805]: E0217 00:24:52.784236 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:52 crc kubenswrapper[4805]: I0217 00:24:52.784563 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:52 crc kubenswrapper[4805]: E0217 00:24:52.784675 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:52 crc kubenswrapper[4805]: E0217 00:24:52.784905 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:54 crc kubenswrapper[4805]: E0217 00:24:54.766690 4805 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 00:24:54 crc kubenswrapper[4805]: I0217 00:24:54.783973 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:54 crc kubenswrapper[4805]: I0217 00:24:54.789066 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:54 crc kubenswrapper[4805]: I0217 00:24:54.789103 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:54 crc kubenswrapper[4805]: I0217 00:24:54.789189 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:54 crc kubenswrapper[4805]: E0217 00:24:54.789313 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:54 crc kubenswrapper[4805]: E0217 00:24:54.789450 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:54 crc kubenswrapper[4805]: E0217 00:24:54.789540 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:54 crc kubenswrapper[4805]: E0217 00:24:54.789887 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:55 crc kubenswrapper[4805]: E0217 00:24:55.191254 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:24:56 crc kubenswrapper[4805]: I0217 00:24:56.783684 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:56 crc kubenswrapper[4805]: I0217 00:24:56.783684 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:56 crc kubenswrapper[4805]: I0217 00:24:56.783912 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:56 crc kubenswrapper[4805]: E0217 00:24:56.784001 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:56 crc kubenswrapper[4805]: E0217 00:24:56.783838 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:24:56 crc kubenswrapper[4805]: E0217 00:24:56.784114 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:56 crc kubenswrapper[4805]: I0217 00:24:56.784646 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:56 crc kubenswrapper[4805]: E0217 00:24:56.784861 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:58 crc kubenswrapper[4805]: I0217 00:24:58.784440 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:24:58 crc kubenswrapper[4805]: I0217 00:24:58.784490 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:24:58 crc kubenswrapper[4805]: I0217 00:24:58.784452 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:24:58 crc kubenswrapper[4805]: E0217 00:24:58.784642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:24:58 crc kubenswrapper[4805]: E0217 00:24:58.784844 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:24:58 crc kubenswrapper[4805]: I0217 00:24:58.784998 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:24:58 crc kubenswrapper[4805]: E0217 00:24:58.785307 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:24:58 crc kubenswrapper[4805]: E0217 00:24:58.786250 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:25:00 crc kubenswrapper[4805]: E0217 00:25:00.192936 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:25:00 crc kubenswrapper[4805]: I0217 00:25:00.784007 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:25:00 crc kubenswrapper[4805]: I0217 00:25:00.784069 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:25:00 crc kubenswrapper[4805]: I0217 00:25:00.784232 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:25:00 crc kubenswrapper[4805]: I0217 00:25:00.784311 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:25:00 crc kubenswrapper[4805]: E0217 00:25:00.784228 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:25:00 crc kubenswrapper[4805]: E0217 00:25:00.784505 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:25:00 crc kubenswrapper[4805]: E0217 00:25:00.784701 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:25:00 crc kubenswrapper[4805]: E0217 00:25:00.784846 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:25:00 crc kubenswrapper[4805]: I0217 00:25:00.784978 4805 scope.go:117] "RemoveContainer" containerID="dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f" Feb 17 00:25:01 crc kubenswrapper[4805]: I0217 00:25:01.766838 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lk6fw_5da6b304-e28f-4666-817f-06bcc107e3fe/kube-multus/1.log" Feb 17 00:25:01 crc kubenswrapper[4805]: I0217 00:25:01.766917 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lk6fw" event={"ID":"5da6b304-e28f-4666-817f-06bcc107e3fe","Type":"ContainerStarted","Data":"123d9a27d0d9e8003b08e74a0e80d8cc248675429f1601cb9849bdeec682f406"} Feb 17 00:25:02 crc kubenswrapper[4805]: I0217 00:25:02.783990 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:25:02 crc kubenswrapper[4805]: I0217 00:25:02.784026 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:25:02 crc kubenswrapper[4805]: E0217 00:25:02.784186 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:25:02 crc kubenswrapper[4805]: I0217 00:25:02.784241 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:25:02 crc kubenswrapper[4805]: I0217 00:25:02.784314 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:25:02 crc kubenswrapper[4805]: E0217 00:25:02.784515 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:25:02 crc kubenswrapper[4805]: E0217 00:25:02.784671 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:25:02 crc kubenswrapper[4805]: E0217 00:25:02.784788 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:25:04 crc kubenswrapper[4805]: I0217 00:25:04.784760 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:25:04 crc kubenswrapper[4805]: E0217 00:25:04.785911 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 00:25:04 crc kubenswrapper[4805]: I0217 00:25:04.785981 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:25:04 crc kubenswrapper[4805]: I0217 00:25:04.786000 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:25:04 crc kubenswrapper[4805]: I0217 00:25:04.786090 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:25:04 crc kubenswrapper[4805]: E0217 00:25:04.786245 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jnv59" podUID="86b8a270-8cb3-4266-9fe0-3cfd027a9174" Feb 17 00:25:04 crc kubenswrapper[4805]: E0217 00:25:04.786421 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 00:25:04 crc kubenswrapper[4805]: E0217 00:25:04.786525 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 00:25:06 crc kubenswrapper[4805]: I0217 00:25:06.783828 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:25:06 crc kubenswrapper[4805]: I0217 00:25:06.783860 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:25:06 crc kubenswrapper[4805]: I0217 00:25:06.783860 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:25:06 crc kubenswrapper[4805]: I0217 00:25:06.784142 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:25:06 crc kubenswrapper[4805]: I0217 00:25:06.786663 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 00:25:06 crc kubenswrapper[4805]: I0217 00:25:06.788079 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 00:25:06 crc kubenswrapper[4805]: I0217 00:25:06.788451 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 00:25:06 crc kubenswrapper[4805]: I0217 00:25:06.788857 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 00:25:06 crc kubenswrapper[4805]: I0217 00:25:06.789231 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 00:25:06 crc kubenswrapper[4805]: I0217 00:25:06.791613 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 00:25:12 crc kubenswrapper[4805]: I0217 00:25:12.756674 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.005848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.040419 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8dtg4"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.041356 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.042102 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.042656 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.043853 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.044249 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.045982 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lst4d"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.046378 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.047722 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bb4kv"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.047866 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.048293 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.048454 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.065438 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.065562 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.065918 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.066887 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.067017 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.067066 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.067146 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.069612 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.069704 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.069696 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.069892 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.070000 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.070031 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.070085 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.070154 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 00:25:13 crc kubenswrapper[4805]: 
I0217 00:25:13.070160 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.070217 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.070252 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.070392 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.070423 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.070398 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.074048 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.074251 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.074649 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.074886 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.074918 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.075105 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.075343 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.075528 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.075705 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.075782 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.077915 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.078080 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.078343 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.078495 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.078640 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.078737 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.085063 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gv6f4"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.086204 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.087785 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.087954 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b4l7s"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.088240 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.088407 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.089680 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29521440-8tt24"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.090116 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-8tt24" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.091186 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.091559 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.094612 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.094962 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.095079 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.095198 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.095213 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.095311 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.095396 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.095395 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.095454 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.095404 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.095442 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.095542 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.095637 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.096734 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.097885 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.098844 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.098969 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.099669 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.099761 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.099897 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.100114 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.100117 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.100280 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.100781 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.100991 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.101096 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.101283 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.101406 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.104522 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mttrb"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.105005 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-t9l4h"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.105239 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r5qzl"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.113761 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.113846 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.113953 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.114599 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.117071 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.117295 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.117897 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.120532 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.120904 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.122392 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.124541 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.124747 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.129944 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gv6f4"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.130908 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tnfnz"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.131734 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tnfnz" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.133472 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.134212 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.134472 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.135221 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.136427 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lst4d"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137092 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-serviceca\") pod \"image-pruner-29521440-8tt24\" (UID: \"0ed7bf5a-a6c8-47a3-8e66-0401495250f3\") " pod="openshift-image-registry/image-pruner-29521440-8tt24" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137182 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkl2\" (UniqueName: \"kubernetes.io/projected/34ca278b-8fb7-4658-a073-e8aefda92bed-kube-api-access-pfkl2\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137414 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-audit\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137475 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89d182b3-73de-4706-9081-580ff1012a8f-encryption-config\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137513 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137542 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137561 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137582 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr8sv\" (UniqueName: 
\"kubernetes.io/projected/33b17555-0aa0-481c-b0e4-23484aa43ba9-kube-api-access-qr8sv\") pod \"machine-approver-56656f9798-9q7jv\" (UID: \"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137609 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdhdl\" (UniqueName: \"kubernetes.io/projected/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-kube-api-access-hdhdl\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137633 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-console-config\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137722 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j96f4\" (UniqueName: \"kubernetes.io/projected/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-kube-api-access-j96f4\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137759 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-config\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137787 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d34bd20a-4947-47af-b757-59246bbda398-config\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137856 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d34bd20a-4947-47af-b757-59246bbda398-serving-cert\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137893 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cljvl\" (UniqueName: \"kubernetes.io/projected/325ff293-1021-49e6-9f52-070c38d61359-kube-api-access-cljvl\") pod \"dns-operator-744455d44c-r5qzl\" (UID: \"325ff293-1021-49e6-9f52-070c38d61359\") " pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137917 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-config\") pod \"apiserver-76f77b778f-8dtg4\" (UID: 
\"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137948 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkqlv\" (UniqueName: \"kubernetes.io/projected/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-kube-api-access-fkqlv\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.137978 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v457b\" (UniqueName: \"kubernetes.io/projected/e791d926-f75f-4056-b7ba-18d3c6474386-kube-api-access-v457b\") pod \"cluster-samples-operator-665b6dd947-bs5g6\" (UID: \"e791d926-f75f-4056-b7ba-18d3c6474386\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138009 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68bd2261-de7d-47ae-a688-59fa77073077-serving-cert\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138030 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89d182b3-73de-4706-9081-580ff1012a8f-audit-dir\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138060 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzbm\" (UniqueName: \"kubernetes.io/projected/24781b06-2cc6-49d0-a506-b992048e1c84-kube-api-access-8bzbm\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138089 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d182b3-73de-4706-9081-580ff1012a8f-serving-cert\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138120 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138145 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-serving-cert\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138192 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-config\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138218 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138243 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-etcd-client\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138270 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpllg\" (UniqueName: \"kubernetes.io/projected/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-kube-api-access-xpllg\") pod \"image-pruner-29521440-8tt24\" (UID: \"0ed7bf5a-a6c8-47a3-8e66-0401495250f3\") " pod="openshift-image-registry/image-pruner-29521440-8tt24" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138358 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138588 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r5qzl"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138769 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fncc\" (UniqueName: \"kubernetes.io/projected/68462f99-97a8-417d-b4ea-2857e82db19b-kube-api-access-7fncc\") pod \"openshift-config-operator-7777fb866f-v6mwk\" (UID: \"68462f99-97a8-417d-b4ea-2857e82db19b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.138830 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-service-ca\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.148559 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 00:25:13 crc 
kubenswrapper[4805]: I0217 00:25:13.150087 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tnfnz"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.150600 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.150759 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.150994 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.151292 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.151497 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152477 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-client-ca\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152537 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w5dq\" (UniqueName: \"kubernetes.io/projected/68bd2261-de7d-47ae-a688-59fa77073077-kube-api-access-7w5dq\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152562 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152578 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-encryption-config\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152601 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-audit-policies\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152620 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33b17555-0aa0-481c-b0e4-23484aa43ba9-auth-proxy-config\") pod \"machine-approver-56656f9798-9q7jv\" (UID: 
\"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152640 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68462f99-97a8-417d-b4ea-2857e82db19b-serving-cert\") pod \"openshift-config-operator-7777fb866f-v6mwk\" (UID: \"68462f99-97a8-417d-b4ea-2857e82db19b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152673 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e791d926-f75f-4056-b7ba-18d3c6474386-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bs5g6\" (UID: \"e791d926-f75f-4056-b7ba-18d3c6474386\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152693 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/325ff293-1021-49e6-9f52-070c38d61359-metrics-tls\") pod \"dns-operator-744455d44c-r5qzl\" (UID: \"325ff293-1021-49e6-9f52-070c38d61359\") " pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152722 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-audit-dir\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152762 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89d182b3-73de-4706-9081-580ff1012a8f-etcd-client\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152787 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152805 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152837 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152862 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-policies\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152942 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-images\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-client-ca\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152992 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a2e72f-8852-4f46-8585-635698d0bcdb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4gcsk\" (UID: \"f2a2e72f-8852-4f46-8585-635698d0bcdb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153006 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b17555-0aa0-481c-b0e4-23484aa43ba9-config\") pod \"machine-approver-56656f9798-9q7jv\" (UID: \"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153029 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-config\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153055 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89d182b3-73de-4706-9081-580ff1012a8f-node-pullsecrets\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc 
kubenswrapper[4805]: I0217 00:25:13.153087 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-etcd-serving-ca\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153106 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-image-import-ca\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153132 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-oauth-config\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153176 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qnsh\" (UniqueName: \"kubernetes.io/projected/d34bd20a-4947-47af-b757-59246bbda398-kube-api-access-2qnsh\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a2e72f-8852-4f46-8585-635698d0bcdb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4gcsk\" (UID: \"f2a2e72f-8852-4f46-8585-635698d0bcdb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153220 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34ca278b-8fb7-4658-a073-e8aefda92bed-serving-cert\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153245 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153301 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153343 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-serving-cert\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153369 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-oauth-serving-cert\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153419 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/33b17555-0aa0-481c-b0e4-23484aa43ba9-machine-approver-tls\") pod \"machine-approver-56656f9798-9q7jv\" (UID: \"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153444 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-service-ca-bundle\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153476 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-dir\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153501 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153528 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153554 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-config\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153576 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-serving-cert\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154754 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/68462f99-97a8-417d-b4ea-2857e82db19b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v6mwk\" (UID: \"68462f99-97a8-417d-b4ea-2857e82db19b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154798 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p2rc\" (UniqueName: \"kubernetes.io/projected/bf20469d-03a9-4939-841d-3c7d28b75aab-kube-api-access-7p2rc\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154820 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d34bd20a-4947-47af-b757-59246bbda398-trusted-ca\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154837 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbttg\" (UniqueName: \"kubernetes.io/projected/f2a2e72f-8852-4f46-8585-635698d0bcdb-kube-api-access-jbttg\") pod \"openshift-apiserver-operator-796bbdcf4f-4gcsk\" (UID: \"f2a2e72f-8852-4f46-8585-635698d0bcdb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154855 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-trusted-ca-bundle\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154874 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf4xn\" (UniqueName: \"kubernetes.io/projected/89d182b3-73de-4706-9081-580ff1012a8f-kube-api-access-qf4xn\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154895 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.152804 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153682 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.153932 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154031 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.155502 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.155531 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154159 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154265 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.155636 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.155651 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154367 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154456 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154489 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154557 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154594 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.154627 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.155890 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 
00:25:13.155896 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.155969 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.156083 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.156264 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.158724 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.168371 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.172931 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.172982 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8dtg4"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.172992 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.173407 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.177594 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.178234 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.179838 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mttrb"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.188440 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29521440-8tt24"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.188453 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.188468 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.188685 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.189202 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.189762 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.191278 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9pqmt"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.191575 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.191670 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.191826 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s576k"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.191947 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.192121 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nl2qv"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.192397 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.192779 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.193124 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.193317 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.193550 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.204413 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.204906 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.205234 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.205735 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.205964 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.206095 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.206397 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.206780 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.206876 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.207373 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.208844 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.209555 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.211972 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.213557 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.214156 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.214208 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.219818 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.224835 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rlklw"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.227944 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.229011 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.229609 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.229842 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b4l7s"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.230688 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.231090 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.232153 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lrgh"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.232925 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.233524 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.235575 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.237948 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.241844 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.243599 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.243846 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.246306 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-49hsz"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.247275 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.249523 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bb4kv"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.250612 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.251678 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s576k"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.252790 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qq794"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.254360 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t9l4h"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.254516 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qq794" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.255855 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-oauth-config\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256400 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-proxy-tls\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256428 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cbd4485-3856-461d-9346-c2dee82e9bb0-config\") pod \"kube-controller-manager-operator-78b949d7b-5984z\" (UID: \"2cbd4485-3856-461d-9346-c2dee82e9bb0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256451 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-etcd-ca\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256474 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwtzs\" (UniqueName: \"kubernetes.io/projected/a046e6a8-bd3a-4064-8be5-38fed147bdcf-kube-api-access-hwtzs\") pod \"downloads-7954f5f757-tnfnz\" (UID: \"a046e6a8-bd3a-4064-8be5-38fed147bdcf\") " 
pod="openshift-console/downloads-7954f5f757-tnfnz" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256511 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qnsh\" (UniqueName: \"kubernetes.io/projected/d34bd20a-4947-47af-b757-59246bbda398-kube-api-access-2qnsh\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256528 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a2e72f-8852-4f46-8585-635698d0bcdb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4gcsk\" (UID: \"f2a2e72f-8852-4f46-8585-635698d0bcdb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256554 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34ca278b-8fb7-4658-a073-e8aefda92bed-serving-cert\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256583 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256608 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256628 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-serving-cert\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256653 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/33b17555-0aa0-481c-b0e4-23484aa43ba9-machine-approver-tls\") pod \"machine-approver-56656f9798-9q7jv\" (UID: \"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256699 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-oauth-serving-cert\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256754 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-service-ca-bundle\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256780 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlnbq\" (UniqueName: \"kubernetes.io/projected/c89a2f9e-db39-452a-b9ec-02a272ed0943-kube-api-access-dlnbq\") pod \"kube-storage-version-migrator-operator-b67b599dd-8s7qc\" (UID: \"c89a2f9e-db39-452a-b9ec-02a272ed0943\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256807 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-dir\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256833 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256858 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25hgp\" (UniqueName: \"kubernetes.io/projected/5d3c99c6-7195-427e-8cd4-f484ad5ee41c-kube-api-access-25hgp\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8ppr\" (UID: \"5d3c99c6-7195-427e-8cd4-f484ad5ee41c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256884 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256906 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-config\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256928 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-serving-cert\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256952 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/68462f99-97a8-417d-b4ea-2857e82db19b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v6mwk\" (UID: \"68462f99-97a8-417d-b4ea-2857e82db19b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.256977 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d3c99c6-7195-427e-8cd4-f484ad5ee41c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8ppr\" (UID: \"5d3c99c6-7195-427e-8cd4-f484ad5ee41c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257000 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h45c\" (UniqueName: \"kubernetes.io/projected/a6bf4e6f-13c5-4276-8124-fdac5ce68cd6-kube-api-access-9h45c\") pod \"olm-operator-6b444d44fb-kr7f6\" (UID: \"a6bf4e6f-13c5-4276-8124-fdac5ce68cd6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257021 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-serving-cert\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257044 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p2rc\" (UniqueName: \"kubernetes.io/projected/bf20469d-03a9-4939-841d-3c7d28b75aab-kube-api-access-7p2rc\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257065 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d34bd20a-4947-47af-b757-59246bbda398-trusted-ca\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257087 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbttg\" (UniqueName: \"kubernetes.io/projected/f2a2e72f-8852-4f46-8585-635698d0bcdb-kube-api-access-jbttg\") pod \"openshift-apiserver-operator-796bbdcf4f-4gcsk\" (UID: \"f2a2e72f-8852-4f46-8585-635698d0bcdb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257111 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzvsw\" (UniqueName: \"kubernetes.io/projected/7be6625f-bf67-4d23-a5e7-7be75e356db7-kube-api-access-fzvsw\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfhbv\" (UID: \"7be6625f-bf67-4d23-a5e7-7be75e356db7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257137 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-trusted-ca-bundle\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-images\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257183 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-config\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257209 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf4xn\" (UniqueName: \"kubernetes.io/projected/89d182b3-73de-4706-9081-580ff1012a8f-kube-api-access-qf4xn\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257232 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257255 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be6625f-bf67-4d23-a5e7-7be75e356db7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfhbv\" (UID: \"7be6625f-bf67-4d23-a5e7-7be75e356db7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257302 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfkl2\" (UniqueName: \"kubernetes.io/projected/34ca278b-8fb7-4658-a073-e8aefda92bed-kube-api-access-pfkl2\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257337 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-audit\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257362 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-serviceca\") pod \"image-pruner-29521440-8tt24\" (UID: \"0ed7bf5a-a6c8-47a3-8e66-0401495250f3\") " pod="openshift-image-registry/image-pruner-29521440-8tt24" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257386 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89d182b3-73de-4706-9081-580ff1012a8f-encryption-config\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257408 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257436 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257459 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4vld\" (UniqueName: \"kubernetes.io/projected/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-kube-api-access-c4vld\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257479 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-srv-cert\") pod \"catalog-operator-68c6474976-hj2wh\" (UID: \"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-etcd-client\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257522 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: 
\"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257543 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr8sv\" (UniqueName: \"kubernetes.io/projected/33b17555-0aa0-481c-b0e4-23484aa43ba9-kube-api-access-qr8sv\") pod \"machine-approver-56656f9798-9q7jv\" (UID: \"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257562 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdhdl\" (UniqueName: \"kubernetes.io/projected/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-kube-api-access-hdhdl\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257579 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-console-config\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257600 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j96f4\" (UniqueName: \"kubernetes.io/projected/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-kube-api-access-j96f4\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-config\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257637 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d34bd20a-4947-47af-b757-59246bbda398-config\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257654 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d34bd20a-4947-47af-b757-59246bbda398-serving-cert\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257673 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cljvl\" (UniqueName: \"kubernetes.io/projected/325ff293-1021-49e6-9f52-070c38d61359-kube-api-access-cljvl\") pod \"dns-operator-744455d44c-r5qzl\" (UID: \"325ff293-1021-49e6-9f52-070c38d61359\") " pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257690 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-config\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257709 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkqlv\" (UniqueName: \"kubernetes.io/projected/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-kube-api-access-fkqlv\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257730 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v457b\" (UniqueName: \"kubernetes.io/projected/e791d926-f75f-4056-b7ba-18d3c6474386-kube-api-access-v457b\") pod \"cluster-samples-operator-665b6dd947-bs5g6\" (UID: \"e791d926-f75f-4056-b7ba-18d3c6474386\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257750 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68bd2261-de7d-47ae-a688-59fa77073077-serving-cert\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257768 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-etcd-service-ca\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257787 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89d182b3-73de-4706-9081-580ff1012a8f-audit-dir\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257806 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cbd4485-3856-461d-9346-c2dee82e9bb0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5984z\" (UID: \"2cbd4485-3856-461d-9346-c2dee82e9bb0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257824 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cbd4485-3856-461d-9346-c2dee82e9bb0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5984z\" (UID: \"2cbd4485-3856-461d-9346-c2dee82e9bb0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257846 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bzbm\" (UniqueName: 
\"kubernetes.io/projected/24781b06-2cc6-49d0-a506-b992048e1c84-kube-api-access-8bzbm\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257866 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d182b3-73de-4706-9081-580ff1012a8f-serving-cert\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257888 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257911 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-serving-cert\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257933 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89a2f9e-db39-452a-b9ec-02a272ed0943-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8s7qc\" (UID: \"c89a2f9e-db39-452a-b9ec-02a272ed0943\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257971 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-config\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.257991 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258009 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-etcd-client\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258030 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fncc\" (UniqueName: \"kubernetes.io/projected/68462f99-97a8-417d-b4ea-2857e82db19b-kube-api-access-7fncc\") pod \"openshift-config-operator-7777fb866f-v6mwk\" (UID: \"68462f99-97a8-417d-b4ea-2857e82db19b\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258051 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a6bf4e6f-13c5-4276-8124-fdac5ce68cd6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kr7f6\" (UID: \"a6bf4e6f-13c5-4276-8124-fdac5ce68cd6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258074 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpllg\" (UniqueName: \"kubernetes.io/projected/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-kube-api-access-xpllg\") pod \"image-pruner-29521440-8tt24\" (UID: \"0ed7bf5a-a6c8-47a3-8e66-0401495250f3\") " pod="openshift-image-registry/image-pruner-29521440-8tt24" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258093 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258113 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258133 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-encryption-config\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258152 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-service-ca\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258170 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-client-ca\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258190 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w5dq\" (UniqueName: \"kubernetes.io/projected/68bd2261-de7d-47ae-a688-59fa77073077-kube-api-access-7w5dq\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258214 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-audit-policies\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258232 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258263 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33b17555-0aa0-481c-b0e4-23484aa43ba9-auth-proxy-config\") pod \"machine-approver-56656f9798-9q7jv\" (UID: \"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258285 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68462f99-97a8-417d-b4ea-2857e82db19b-serving-cert\") pod \"openshift-config-operator-7777fb866f-v6mwk\" (UID: \"68462f99-97a8-417d-b4ea-2857e82db19b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258352 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e791d926-f75f-4056-b7ba-18d3c6474386-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bs5g6\" (UID: \"e791d926-f75f-4056-b7ba-18d3c6474386\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258376 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258420 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c89a2f9e-db39-452a-b9ec-02a272ed0943-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8s7qc\" (UID: \"c89a2f9e-db39-452a-b9ec-02a272ed0943\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258441 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-profile-collector-cert\") pod \"catalog-operator-68c6474976-hj2wh\" (UID: \"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258498 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.258567 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/325ff293-1021-49e6-9f52-070c38d61359-metrics-tls\") pod \"dns-operator-744455d44c-r5qzl\" (UID: \"325ff293-1021-49e6-9f52-070c38d61359\") " pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259372 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-audit-dir\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259414 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7be6625f-bf67-4d23-a5e7-7be75e356db7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfhbv\" (UID: \"7be6625f-bf67-4d23-a5e7-7be75e356db7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259439 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvz8b\" (UniqueName: \"kubernetes.io/projected/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-kube-api-access-kvz8b\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259464 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259486 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259508 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89d182b3-73de-4706-9081-580ff1012a8f-etcd-client\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259531 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259553 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259576 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-policies\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259599 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk294\" (UniqueName: \"kubernetes.io/projected/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-kube-api-access-lk294\") pod \"catalog-operator-68c6474976-hj2wh\" (UID: \"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-client-ca\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259682 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a2e72f-8852-4f46-8585-635698d0bcdb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4gcsk\" (UID: \"f2a2e72f-8852-4f46-8585-635698d0bcdb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259730 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-images\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259757 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259760 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b17555-0aa0-481c-b0e4-23484aa43ba9-config\") pod \"machine-approver-56656f9798-9q7jv\" (UID: \"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a6bf4e6f-13c5-4276-8124-fdac5ce68cd6-srv-cert\") pod \"olm-operator-6b444d44fb-kr7f6\" (UID: \"a6bf4e6f-13c5-4276-8124-fdac5ce68cd6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.259812 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-image-import-ca\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.260014 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-config\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.260039 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89d182b3-73de-4706-9081-580ff1012a8f-node-pullsecrets\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.260061 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-etcd-serving-ca\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.260862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-etcd-serving-ca\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.261607 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a2e72f-8852-4f46-8585-635698d0bcdb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4gcsk\" (UID: \"f2a2e72f-8852-4f46-8585-635698d0bcdb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.261734 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.261750 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-serving-cert\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.262217 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rlklw"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.262243 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9pqmt"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.262556 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34ca278b-8fb7-4658-a073-e8aefda92bed-serving-cert\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.263217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.263412 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.263451 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-console-config\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.263923 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.263969 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.264105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d34bd20a-4947-47af-b757-59246bbda398-trusted-ca\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.264217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-config\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.264707 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-service-ca\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.265044 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d34bd20a-4947-47af-b757-59246bbda398-config\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.265378 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-trusted-ca-bundle\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.265469 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a2e72f-8852-4f46-8585-635698d0bcdb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4gcsk\" (UID: \"f2a2e72f-8852-4f46-8585-635698d0bcdb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.265510 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-images\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.265973 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33b17555-0aa0-481c-b0e4-23484aa43ba9-config\") pod \"machine-approver-56656f9798-9q7jv\" (UID: \"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.266040 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.266071 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.266241 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.266960 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-client-ca\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.267393 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.267414 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-client-ca\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.267693 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/33b17555-0aa0-481c-b0e4-23484aa43ba9-machine-approver-tls\") pod \"machine-approver-56656f9798-9q7jv\" (UID: \"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.267707 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-audit-policies\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.267858 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d34bd20a-4947-47af-b757-59246bbda398-serving-cert\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.267941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-image-import-ca\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.268448 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-oauth-serving-cert\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.268455 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33b17555-0aa0-481c-b0e4-23484aa43ba9-auth-proxy-config\") pod \"machine-approver-56656f9798-9q7jv\" (UID: \"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.268583 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89d182b3-73de-4706-9081-580ff1012a8f-encryption-config\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.269095 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.269193 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.269205 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-config\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.269262 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-audit\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.269767 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-encryption-config\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.269905 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-service-ca-bundle\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.269978 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-dir\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.270794 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-serviceca\") pod \"image-pruner-29521440-8tt24\" (UID: \"0ed7bf5a-a6c8-47a3-8e66-0401495250f3\") " pod="openshift-image-registry/image-pruner-29521440-8tt24" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.271043 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68462f99-97a8-417d-b4ea-2857e82db19b-serving-cert\") pod \"openshift-config-operator-7777fb866f-v6mwk\" (UID: \"68462f99-97a8-417d-b4ea-2857e82db19b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.271284 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89d182b3-73de-4706-9081-580ff1012a8f-node-pullsecrets\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.271571 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-config\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.271602 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89d182b3-73de-4706-9081-580ff1012a8f-etcd-client\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.271670 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.271702 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-49hsz"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.271712 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.271870 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/68462f99-97a8-417d-b4ea-2857e82db19b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v6mwk\" (UID: \"68462f99-97a8-417d-b4ea-2857e82db19b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.272208 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-audit-dir\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.272244 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89d182b3-73de-4706-9081-580ff1012a8f-audit-dir\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.272977 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.273197 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-config\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.273230 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.273399 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: 
\"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.273738 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.274283 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89d182b3-73de-4706-9081-580ff1012a8f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.274784 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.275025 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.274991 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.275229 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68bd2261-de7d-47ae-a688-59fa77073077-serving-cert\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.275669 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-oauth-config\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.276617 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.276740 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-policies\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.276958 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.276995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.277804 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.278040 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-config\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.278403 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d182b3-73de-4706-9081-580ff1012a8f-serving-cert\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.278731 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.278965 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.278968 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-serving-cert\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.279620 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/325ff293-1021-49e6-9f52-070c38d61359-metrics-tls\") pod \"dns-operator-744455d44c-r5qzl\" (UID: \"325ff293-1021-49e6-9f52-070c38d61359\") " pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.280024 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lrgh"] Feb 17 
00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.280076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-serving-cert\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.280510 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.281000 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xs2qc"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.281141 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e791d926-f75f-4056-b7ba-18d3c6474386-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bs5g6\" (UID: \"e791d926-f75f-4056-b7ba-18d3c6474386\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.289223 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.289374 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-etcd-client\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.291145 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-l2g7w"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.291275 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.293388 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l2g7w"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.293424 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xs2qc"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.293539 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.296187 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-t5w9q"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.297056 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t5w9q" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.302883 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t5w9q"] Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.304345 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.323819 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.343672 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.360797 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-srv-cert\") pod \"catalog-operator-68c6474976-hj2wh\" (UID: \"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.360831 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-etcd-client\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.360883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-etcd-service-ca\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.360915 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cbd4485-3856-461d-9346-c2dee82e9bb0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5984z\" (UID: \"2cbd4485-3856-461d-9346-c2dee82e9bb0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.360933 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cbd4485-3856-461d-9346-c2dee82e9bb0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5984z\" (UID: \"2cbd4485-3856-461d-9346-c2dee82e9bb0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.360962 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89a2f9e-db39-452a-b9ec-02a272ed0943-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8s7qc\" (UID: \"c89a2f9e-db39-452a-b9ec-02a272ed0943\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361011 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a6bf4e6f-13c5-4276-8124-fdac5ce68cd6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kr7f6\" (UID: \"a6bf4e6f-13c5-4276-8124-fdac5ce68cd6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361055 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361073 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7be6625f-bf67-4d23-a5e7-7be75e356db7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfhbv\" (UID: \"7be6625f-bf67-4d23-a5e7-7be75e356db7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvz8b\" (UniqueName: \"kubernetes.io/projected/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-kube-api-access-kvz8b\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361109 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c89a2f9e-db39-452a-b9ec-02a272ed0943-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8s7qc\" (UID: \"c89a2f9e-db39-452a-b9ec-02a272ed0943\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361128 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-profile-collector-cert\") pod \"catalog-operator-68c6474976-hj2wh\" (UID: \"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361150 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk294\" (UniqueName: \"kubernetes.io/projected/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-kube-api-access-lk294\") pod \"catalog-operator-68c6474976-hj2wh\" (UID: \"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361184 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a6bf4e6f-13c5-4276-8124-fdac5ce68cd6-srv-cert\") pod \"olm-operator-6b444d44fb-kr7f6\" (UID: \"a6bf4e6f-13c5-4276-8124-fdac5ce68cd6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361204 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-proxy-tls\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361221 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cbd4485-3856-461d-9346-c2dee82e9bb0-config\") pod \"kube-controller-manager-operator-78b949d7b-5984z\" (UID: \"2cbd4485-3856-461d-9346-c2dee82e9bb0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361246 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-etcd-ca\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361289 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwtzs\" (UniqueName: \"kubernetes.io/projected/a046e6a8-bd3a-4064-8be5-38fed147bdcf-kube-api-access-hwtzs\") pod \"downloads-7954f5f757-tnfnz\" (UID: \"a046e6a8-bd3a-4064-8be5-38fed147bdcf\") " pod="openshift-console/downloads-7954f5f757-tnfnz" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361365 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlnbq\" (UniqueName: \"kubernetes.io/projected/c89a2f9e-db39-452a-b9ec-02a272ed0943-kube-api-access-dlnbq\") pod \"kube-storage-version-migrator-operator-b67b599dd-8s7qc\" (UID: \"c89a2f9e-db39-452a-b9ec-02a272ed0943\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361385 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25hgp\" (UniqueName: \"kubernetes.io/projected/5d3c99c6-7195-427e-8cd4-f484ad5ee41c-kube-api-access-25hgp\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8ppr\" (UID: \"5d3c99c6-7195-427e-8cd4-f484ad5ee41c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361410 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d3c99c6-7195-427e-8cd4-f484ad5ee41c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8ppr\" (UID: \"5d3c99c6-7195-427e-8cd4-f484ad5ee41c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361530 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzvsw\" (UniqueName: \"kubernetes.io/projected/7be6625f-bf67-4d23-a5e7-7be75e356db7-kube-api-access-fzvsw\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfhbv\" (UID: \"7be6625f-bf67-4d23-a5e7-7be75e356db7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" Feb 17 00:25:13 crc 
kubenswrapper[4805]: I0217 00:25:13.361562 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h45c\" (UniqueName: \"kubernetes.io/projected/a6bf4e6f-13c5-4276-8124-fdac5ce68cd6-kube-api-access-9h45c\") pod \"olm-operator-6b444d44fb-kr7f6\" (UID: \"a6bf4e6f-13c5-4276-8124-fdac5ce68cd6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361585 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-serving-cert\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361616 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-images\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361640 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-config\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361663 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be6625f-bf67-4d23-a5e7-7be75e356db7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfhbv\" (UID: \"7be6625f-bf67-4d23-a5e7-7be75e356db7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.361792 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4vld\" (UniqueName: \"kubernetes.io/projected/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-kube-api-access-c4vld\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.362137 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.362246 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cbd4485-3856-461d-9346-c2dee82e9bb0-config\") pod \"kube-controller-manager-operator-78b949d7b-5984z\" (UID: \"2cbd4485-3856-461d-9346-c2dee82e9bb0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.363854 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.364255 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cbd4485-3856-461d-9346-c2dee82e9bb0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5984z\" (UID: \"2cbd4485-3856-461d-9346-c2dee82e9bb0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.374695 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7be6625f-bf67-4d23-a5e7-7be75e356db7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfhbv\" (UID: \"7be6625f-bf67-4d23-a5e7-7be75e356db7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.383945 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.392540 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be6625f-bf67-4d23-a5e7-7be75e356db7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfhbv\" (UID: \"7be6625f-bf67-4d23-a5e7-7be75e356db7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.403075 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.422835 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.427841 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d3c99c6-7195-427e-8cd4-f484ad5ee41c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8ppr\" (UID: \"5d3c99c6-7195-427e-8cd4-f484ad5ee41c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.443656 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.452567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-config\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.464310 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.483207 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.496037 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-serving-cert\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.503296 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.515503 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-etcd-client\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.523558 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.532780 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-etcd-ca\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.543395 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.551866 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-etcd-service-ca\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.564148 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.583194 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.603572 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.623182 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.644584 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.662978 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.683308 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.704280 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.724420 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.742991 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.763935 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.784988 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.805435 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 00:25:13 crc kubenswrapper[4805]: I0217 00:25:13.823985 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.020832 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.021115 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.021268 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.021616 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.021771 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.021811 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.022412 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89a2f9e-db39-452a-b9ec-02a272ed0943-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8s7qc\" (UID: \"c89a2f9e-db39-452a-b9ec-02a272ed0943\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.022480 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.022612 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.025143 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.030668 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.043467 4805 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.056353 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c89a2f9e-db39-452a-b9ec-02a272ed0943-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8s7qc\" (UID: \"c89a2f9e-db39-452a-b9ec-02a272ed0943\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.063841 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.084131 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.103593 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.125364 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.144246 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.163864 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.183857 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.204119 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.221757 4805 request.go:700] Waited for 1.014082198s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0 Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.224819 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.246243 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.264827 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.278225 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a6bf4e6f-13c5-4276-8124-fdac5ce68cd6-srv-cert\") pod \"olm-operator-6b444d44fb-kr7f6\" (UID: \"a6bf4e6f-13c5-4276-8124-fdac5ce68cd6\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.285358 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.294895 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-profile-collector-cert\") pod \"catalog-operator-68c6474976-hj2wh\" (UID: \"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.296653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a6bf4e6f-13c5-4276-8124-fdac5ce68cd6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kr7f6\" (UID: \"a6bf4e6f-13c5-4276-8124-fdac5ce68cd6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.304465 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.325128 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.343348 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 00:25:14 crc kubenswrapper[4805]: E0217 00:25:14.361814 4805 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 17 00:25:14 crc kubenswrapper[4805]: E0217 00:25:14.361836 4805 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 17 00:25:14 crc kubenswrapper[4805]: E0217 00:25:14.361968 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-proxy-tls podName:d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:14.861944872 +0000 UTC m=+140.877754280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-proxy-tls") pod "machine-config-operator-74547568cd-lxb2h" (UID: "d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b") : failed to sync secret cache: timed out waiting for the condition Feb 17 00:25:14 crc kubenswrapper[4805]: E0217 00:25:14.362156 4805 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 17 00:25:14 crc kubenswrapper[4805]: E0217 00:25:14.362414 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-images podName:d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:14.862396935 +0000 UTC m=+140.878206343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-images") pod "machine-config-operator-74547568cd-lxb2h" (UID: "d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b") : failed to sync configmap cache: timed out waiting for the condition Feb 17 00:25:14 crc kubenswrapper[4805]: E0217 00:25:14.362512 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-srv-cert podName:8a4f45dd-052e-4cc4-b491-ec02b32ea1fa nodeName:}" failed. No retries permitted until 2026-02-17 00:25:14.862481087 +0000 UTC m=+140.878290485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-srv-cert") pod "catalog-operator-68c6474976-hj2wh" (UID: "8a4f45dd-052e-4cc4-b491-ec02b32ea1fa") : failed to sync secret cache: timed out waiting for the condition Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.364172 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.383828 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.404238 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.427457 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.444669 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.484438 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.503988 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.523758 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.544226 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.563563 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.584471 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.606525 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.625413 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.653004 4805 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.683460 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.704433 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.723569 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.744304 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.765007 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.785406 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.804734 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.827981 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.844177 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.864680 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.883695 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.883875 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-proxy-tls\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.884052 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-images\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.885015 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-srv-cert\") pod \"catalog-operator-68c6474976-hj2wh\" (UID: \"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.885065 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-images\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.890207 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-proxy-tls\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.897903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-srv-cert\") pod \"catalog-operator-68c6474976-hj2wh\" (UID: \"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.905150 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.923915 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.944636 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 00:25:14 crc kubenswrapper[4805]: I0217 00:25:14.963656 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.014358 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qnsh\" (UniqueName: \"kubernetes.io/projected/d34bd20a-4947-47af-b757-59246bbda398-kube-api-access-2qnsh\") pod \"console-operator-58897d9998-mttrb\" (UID: \"d34bd20a-4947-47af-b757-59246bbda398\") " pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.019437 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr8sv\" (UniqueName: \"kubernetes.io/projected/33b17555-0aa0-481c-b0e4-23484aa43ba9-kube-api-access-qr8sv\") pod \"machine-approver-56656f9798-9q7jv\" (UID: \"33b17555-0aa0-481c-b0e4-23484aa43ba9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.037743 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.046502 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdhdl\" (UniqueName: \"kubernetes.io/projected/fafbbfd8-7e64-432a-b47c-7ad2e9388f2c-kube-api-access-hdhdl\") pod \"machine-api-operator-5694c8668f-bb4kv\" (UID: \"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.061110 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j96f4\" (UniqueName: \"kubernetes.io/projected/0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff-kube-api-access-j96f4\") pod \"apiserver-7bbb656c7d-fdxjw\" (UID: \"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.080618 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbttg\" (UniqueName: \"kubernetes.io/projected/f2a2e72f-8852-4f46-8585-635698d0bcdb-kube-api-access-jbttg\") pod \"openshift-apiserver-operator-796bbdcf4f-4gcsk\" (UID: \"f2a2e72f-8852-4f46-8585-635698d0bcdb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.102488 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf4xn\" (UniqueName: \"kubernetes.io/projected/89d182b3-73de-4706-9081-580ff1012a8f-kube-api-access-qf4xn\") pod \"apiserver-76f77b778f-8dtg4\" (UID: \"89d182b3-73de-4706-9081-580ff1012a8f\") " pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.119349 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w5dq\" (UniqueName: \"kubernetes.io/projected/68bd2261-de7d-47ae-a688-59fa77073077-kube-api-access-7w5dq\") pod \"route-controller-manager-6576b87f9c-xvrjn\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.140663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cljvl\" (UniqueName: \"kubernetes.io/projected/325ff293-1021-49e6-9f52-070c38d61359-kube-api-access-cljvl\") pod \"dns-operator-744455d44c-r5qzl\" (UID: \"325ff293-1021-49e6-9f52-070c38d61359\") " pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.168429 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfkl2\" (UniqueName: \"kubernetes.io/projected/34ca278b-8fb7-4658-a073-e8aefda92bed-kube-api-access-pfkl2\") pod \"controller-manager-879f6c89f-lst4d\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.170807 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.184001 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.186353 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkqlv\" (UniqueName: \"kubernetes.io/projected/3a33b46d-a64e-4203-b3e0-ec9dc169c9d8-kube-api-access-fkqlv\") pod \"authentication-operator-69f744f599-gv6f4\" (UID: \"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.198289 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.204426 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.209130 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v457b\" (UniqueName: \"kubernetes.io/projected/e791d926-f75f-4056-b7ba-18d3c6474386-kube-api-access-v457b\") pod \"cluster-samples-operator-665b6dd947-bs5g6\" (UID: \"e791d926-f75f-4056-b7ba-18d3c6474386\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.211694 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.222156 4805 request.go:700] Waited for 1.94999954s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/pruner/token Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.237751 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.238294 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bzbm\" (UniqueName: \"kubernetes.io/projected/24781b06-2cc6-49d0-a506-b992048e1c84-kube-api-access-8bzbm\") pod \"console-f9d7485db-t9l4h\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.244616 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpllg\" (UniqueName: \"kubernetes.io/projected/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-kube-api-access-xpllg\") pod \"image-pruner-29521440-8tt24\" (UID: \"0ed7bf5a-a6c8-47a3-8e66-0401495250f3\") " pod="openshift-image-registry/image-pruner-29521440-8tt24" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.253943 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.277443 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fncc\" (UniqueName: \"kubernetes.io/projected/68462f99-97a8-417d-b4ea-2857e82db19b-kube-api-access-7fncc\") pod \"openshift-config-operator-7777fb866f-v6mwk\" (UID: \"68462f99-97a8-417d-b4ea-2857e82db19b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.277703 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-8tt24" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.279262 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p2rc\" (UniqueName: \"kubernetes.io/projected/bf20469d-03a9-4939-841d-3c7d28b75aab-kube-api-access-7p2rc\") pod \"oauth-openshift-558db77b4-b4l7s\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.285476 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.290350 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.305042 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.311473 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.313729 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mttrb"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.319728 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.323826 4805 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.325539 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.350905 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.355030 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.364130 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.384267 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.404861 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.424990 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.448075 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.464036 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.503822 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cbd4485-3856-461d-9346-c2dee82e9bb0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5984z\" (UID: \"2cbd4485-3856-461d-9346-c2dee82e9bb0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.521825 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvz8b\" (UniqueName: \"kubernetes.io/projected/d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b-kube-api-access-kvz8b\") pod \"machine-config-operator-74547568cd-lxb2h\" (UID: \"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.540969 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk294\" (UniqueName: \"kubernetes.io/projected/8a4f45dd-052e-4cc4-b491-ec02b32ea1fa-kube-api-access-lk294\") pod \"catalog-operator-68c6474976-hj2wh\" (UID: \"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.564836 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.572745 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwtzs\" (UniqueName: \"kubernetes.io/projected/a046e6a8-bd3a-4064-8be5-38fed147bdcf-kube-api-access-hwtzs\") pod \"downloads-7954f5f757-tnfnz\" (UID: \"a046e6a8-bd3a-4064-8be5-38fed147bdcf\") " pod="openshift-console/downloads-7954f5f757-tnfnz" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.581585 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzvsw\" (UniqueName: \"kubernetes.io/projected/7be6625f-bf67-4d23-a5e7-7be75e356db7-kube-api-access-fzvsw\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfhbv\" (UID: \"7be6625f-bf67-4d23-a5e7-7be75e356db7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.597412 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25hgp\" (UniqueName: \"kubernetes.io/projected/5d3c99c6-7195-427e-8cd4-f484ad5ee41c-kube-api-access-25hgp\") pod \"control-plane-machine-set-operator-78cbb6b69f-w8ppr\" (UID: \"5d3c99c6-7195-427e-8cd4-f484ad5ee41c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.626509 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h45c\" (UniqueName: \"kubernetes.io/projected/a6bf4e6f-13c5-4276-8124-fdac5ce68cd6-kube-api-access-9h45c\") pod \"olm-operator-6b444d44fb-kr7f6\" (UID: \"a6bf4e6f-13c5-4276-8124-fdac5ce68cd6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.641462 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4vld\" (UniqueName: \"kubernetes.io/projected/40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0-kube-api-access-c4vld\") pod \"etcd-operator-b45778765-9pqmt\" (UID: \"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.657945 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlnbq\" (UniqueName: \"kubernetes.io/projected/c89a2f9e-db39-452a-b9ec-02a272ed0943-kube-api-access-dlnbq\") pod \"kube-storage-version-migrator-operator-b67b599dd-8s7qc\" (UID: \"c89a2f9e-db39-452a-b9ec-02a272ed0943\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.658857 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-tnfnz" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702381 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702417 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c367e959-10fb-43d9-baf3-31123c06738b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702446 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-service-ca-bundle\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702467 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702495 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c367e959-10fb-43d9-baf3-31123c06738b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702517 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dd6sw\" (UID: \"eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702543 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f1342ea-63a1-446c-9b69-c3c0f5c4adc0-proxy-tls\") pod \"machine-config-controller-84d6567774-s56fj\" (UID: \"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702580 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dd6sw\" (UID: 
\"eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dd6sw\" (UID: \"eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702618 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-metrics-tls\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702642 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn88w\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-kube-api-access-pn88w\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702667 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbba70e9-cfc8-4be6-b1ec-0bb179fcf721-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tlsxw\" (UID: \"dbba70e9-cfc8-4be6-b1ec-0bb179fcf721\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702682 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f1342ea-63a1-446c-9b69-c3c0f5c4adc0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-s56fj\" (UID: \"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702698 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh52b\" (UniqueName: \"kubernetes.io/projected/19c7b0a4-a389-48ec-90d2-766e8891a87b-kube-api-access-bh52b\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702715 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19c7b0a4-a389-48ec-90d2-766e8891a87b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702729 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r8tr\" (UniqueName: 
\"kubernetes.io/projected/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-kube-api-access-8r8tr\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702755 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-bound-sa-token\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702772 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhs2d\" (UniqueName: \"kubernetes.io/projected/9f1342ea-63a1-446c-9b69-c3c0f5c4adc0-kube-api-access-vhs2d\") pod \"machine-config-controller-84d6567774-s56fj\" (UID: \"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702788 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-default-certificate\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702803 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbba70e9-cfc8-4be6-b1ec-0bb179fcf721-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tlsxw\" (UID: \"dbba70e9-cfc8-4be6-b1ec-0bb179fcf721\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702828 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-stats-auth\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702856 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmddf\" (UniqueName: \"kubernetes.io/projected/2f3b3979-05ee-4c4b-90a2-1a35e1c34c3a-kube-api-access-xmddf\") pod \"migrator-59844c95c7-lnqjx\" (UID: \"2f3b3979-05ee-4c4b-90a2-1a35e1c34c3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702883 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-trusted-ca\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702940 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dbba70e9-cfc8-4be6-b1ec-0bb179fcf721-config\") pod \"kube-apiserver-operator-766d6c64bb-tlsxw\" (UID: \"dbba70e9-cfc8-4be6-b1ec-0bb179fcf721\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702960 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19c7b0a4-a389-48ec-90d2-766e8891a87b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702978 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-registry-certificates\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.702996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-trusted-ca\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.703012 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5gpq\" (UniqueName: \"kubernetes.io/projected/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-kube-api-access-f5gpq\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.703027 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-metrics-certs\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.703050 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-registry-tls\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.703065 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19c7b0a4-a389-48ec-90d2-766e8891a87b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:15 crc kubenswrapper[4805]: E0217 00:25:15.703402 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:16.203391575 +0000 UTC m=+142.219200973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.703622 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.715086 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lst4d"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.721644 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.725387 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8dtg4"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.727973 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.734876 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr" Feb 17 00:25:15 crc kubenswrapper[4805]: W0217 00:25:15.741954 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34ca278b_8fb7_4658_a073_e8aefda92bed.slice/crio-a30789f092088fc2497aaec3c78d7d774e6241028f37f4afd6356f887835ebdd WatchSource:0}: Error finding container a30789f092088fc2497aaec3c78d7d774e6241028f37f4afd6356f887835ebdd: Status 404 returned error can't find the container with id a30789f092088fc2497aaec3c78d7d774e6241028f37f4afd6356f887835ebdd Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.743301 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.785841 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bb4kv"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.786144 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.786154 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gv6f4"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.787089 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.803558 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.803750 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-registration-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.803771 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjcs\" (UniqueName: \"kubernetes.io/projected/60bcdb5c-be8b-4095-b909-0ea48bb3ff18-kube-api-access-rzjcs\") pod \"package-server-manager-789f6589d5-cp7v9\" (UID: \"60bcdb5c-be8b-4095-b909-0ea48bb3ff18\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.803808 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc20b905-c7f8-491a-8311-f7a7107d05b1-config-volume\") pod \"dns-default-l2g7w\" (UID: \"fc20b905-c7f8-491a-8311-f7a7107d05b1\") " pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.803823 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b8dca48-97f6-4af6-a4bf-38d2a5571501-serving-cert\") pod \"service-ca-operator-777779d784-c9zfj\" (UID: \"3b8dca48-97f6-4af6-a4bf-38d2a5571501\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.803838 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lrgh\" (UID: \"b4b82891-39be-4580-8ec1-80e78114ca96\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.803867 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmddf\" (UniqueName: \"kubernetes.io/projected/2f3b3979-05ee-4c4b-90a2-1a35e1c34c3a-kube-api-access-xmddf\") pod \"migrator-59844c95c7-lnqjx\" (UID: \"2f3b3979-05ee-4c4b-90a2-1a35e1c34c3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.803903 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf51326-66d4-4091-be72-bade050afd5d-webhook-cert\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 
17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.803918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-trusted-ca\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.803933 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b99d11ab-7f68-4b1c-82d1-4afa367335ac-certs\") pod \"machine-config-server-qq794\" (UID: \"b99d11ab-7f68-4b1c-82d1-4afa367335ac\") " pod="openshift-machine-config-operator/machine-config-server-qq794" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.803956 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rnbr\" (UniqueName: \"kubernetes.io/projected/b3ec24d0-f900-45ff-a0fb-fb6cd6f24324-kube-api-access-6rnbr\") pod \"ingress-canary-t5w9q\" (UID: \"b3ec24d0-f900-45ff-a0fb-fb6cd6f24324\") " pod="openshift-ingress-canary/ingress-canary-t5w9q" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804042 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n44jz\" (UniqueName: \"kubernetes.io/projected/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-kube-api-access-n44jz\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804060 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6qwt\" (UniqueName: \"kubernetes.io/projected/53331bd6-ce61-4cf5-a403-34b55ba2fed0-kube-api-access-f6qwt\") pod \"service-ca-9c57cc56f-49hsz\" (UID: \"53331bd6-ce61-4cf5-a403-34b55ba2fed0\") " pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804083 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3ec24d0-f900-45ff-a0fb-fb6cd6f24324-cert\") pod \"ingress-canary-t5w9q\" (UID: \"b3ec24d0-f900-45ff-a0fb-fb6cd6f24324\") " pod="openshift-ingress-canary/ingress-canary-t5w9q" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804101 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbba70e9-cfc8-4be6-b1ec-0bb179fcf721-config\") pod \"kube-apiserver-operator-766d6c64bb-tlsxw\" (UID: \"dbba70e9-cfc8-4be6-b1ec-0bb179fcf721\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804117 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-socket-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804132 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt9js\" (UniqueName: 
\"kubernetes.io/projected/3b8dca48-97f6-4af6-a4bf-38d2a5571501-kube-api-access-wt9js\") pod \"service-ca-operator-777779d784-c9zfj\" (UID: \"3b8dca48-97f6-4af6-a4bf-38d2a5571501\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804153 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4208e92a-1970-441e-a265-f7459d384c6f-secret-volume\") pod \"collect-profiles-29521455-gxtgv\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804169 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/53331bd6-ce61-4cf5-a403-34b55ba2fed0-signing-key\") pod \"service-ca-9c57cc56f-49hsz\" (UID: \"53331bd6-ce61-4cf5-a403-34b55ba2fed0\") " pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804193 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19c7b0a4-a389-48ec-90d2-766e8891a87b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804219 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z48j2\" (UniqueName: \"kubernetes.io/projected/c723bdca-c9ea-41de-b364-5d0ea1915909-kube-api-access-z48j2\") pod \"multus-admission-controller-857f4d67dd-rlklw\" (UID: \"c723bdca-c9ea-41de-b364-5d0ea1915909\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804238 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b8dca48-97f6-4af6-a4bf-38d2a5571501-config\") pod \"service-ca-operator-777779d784-c9zfj\" (UID: \"3b8dca48-97f6-4af6-a4bf-38d2a5571501\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804280 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-registry-certificates\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804300 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-trusted-ca\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804339 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5gpq\" (UniqueName: \"kubernetes.io/projected/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-kube-api-access-f5gpq\") pod 
\"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804357 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgz7h\" (UniqueName: \"kubernetes.io/projected/b99d11ab-7f68-4b1c-82d1-4afa367335ac-kube-api-access-hgz7h\") pod \"machine-config-server-qq794\" (UID: \"b99d11ab-7f68-4b1c-82d1-4afa367335ac\") " pod="openshift-machine-config-operator/machine-config-server-qq794" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804380 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-metrics-certs\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804403 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-registry-tls\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804420 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19c7b0a4-a389-48ec-90d2-766e8891a87b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804435 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc20b905-c7f8-491a-8311-f7a7107d05b1-metrics-tls\") pod \"dns-default-l2g7w\" (UID: \"fc20b905-c7f8-491a-8311-f7a7107d05b1\") " pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804450 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lrgh\" (UID: \"b4b82891-39be-4580-8ec1-80e78114ca96\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804466 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c723bdca-c9ea-41de-b364-5d0ea1915909-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rlklw\" (UID: \"c723bdca-c9ea-41de-b364-5d0ea1915909\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804488 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" 
Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804506 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c367e959-10fb-43d9-baf3-31123c06738b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804521 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0cf51326-66d4-4091-be72-bade050afd5d-tmpfs\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804536 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-service-ca-bundle\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804558 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-mountpoint-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804572 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-plugins-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804596 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c367e959-10fb-43d9-baf3-31123c06738b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804611 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dd6sw\" (UID: \"eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlq9p\" (UniqueName: \"kubernetes.io/projected/fc20b905-c7f8-491a-8311-f7a7107d05b1-kube-api-access-qlq9p\") pod \"dns-default-l2g7w\" (UID: \"fc20b905-c7f8-491a-8311-f7a7107d05b1\") " pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804652 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/9f1342ea-63a1-446c-9b69-c3c0f5c4adc0-proxy-tls\") pod \"machine-config-controller-84d6567774-s56fj\" (UID: \"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804685 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dd6sw\" (UID: \"eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804716 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dd6sw\" (UID: \"eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804736 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzhsg\" (UniqueName: \"kubernetes.io/projected/4208e92a-1970-441e-a265-f7459d384c6f-kube-api-access-tzhsg\") pod \"collect-profiles-29521455-gxtgv\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804765 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-metrics-tls\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804784 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b99d11ab-7f68-4b1c-82d1-4afa367335ac-node-bootstrap-token\") pod \"machine-config-server-qq794\" (UID: \"b99d11ab-7f68-4b1c-82d1-4afa367335ac\") " pod="openshift-machine-config-operator/machine-config-server-qq794" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804805 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4208e92a-1970-441e-a265-f7459d384c6f-config-volume\") pod \"collect-profiles-29521455-gxtgv\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804826 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn88w\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-kube-api-access-pn88w\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804846 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dbba70e9-cfc8-4be6-b1ec-0bb179fcf721-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tlsxw\" (UID: \"dbba70e9-cfc8-4be6-b1ec-0bb179fcf721\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804860 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf51326-66d4-4091-be72-bade050afd5d-apiservice-cert\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804876 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/60bcdb5c-be8b-4095-b909-0ea48bb3ff18-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cp7v9\" (UID: \"60bcdb5c-be8b-4095-b909-0ea48bb3ff18\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804895 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f1342ea-63a1-446c-9b69-c3c0f5c4adc0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-s56fj\" (UID: \"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804909 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/53331bd6-ce61-4cf5-a403-34b55ba2fed0-signing-cabundle\") pod \"service-ca-9c57cc56f-49hsz\" (UID: \"53331bd6-ce61-4cf5-a403-34b55ba2fed0\") " pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804937 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh52b\" (UniqueName: \"kubernetes.io/projected/19c7b0a4-a389-48ec-90d2-766e8891a87b-kube-api-access-bh52b\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.804954 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9npsz\" (UniqueName: \"kubernetes.io/projected/0cf51326-66d4-4091-be72-bade050afd5d-kube-api-access-9npsz\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.805002 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19c7b0a4-a389-48ec-90d2-766e8891a87b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.805019 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8r8tr\" (UniqueName: \"kubernetes.io/projected/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-kube-api-access-8r8tr\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.805035 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-csi-data-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.805071 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-bound-sa-token\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.805095 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhs2d\" (UniqueName: \"kubernetes.io/projected/9f1342ea-63a1-446c-9b69-c3c0f5c4adc0-kube-api-access-vhs2d\") pod \"machine-config-controller-84d6567774-s56fj\" (UID: \"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.805111 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-default-certificate\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.805126 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbba70e9-cfc8-4be6-b1ec-0bb179fcf721-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tlsxw\" (UID: \"dbba70e9-cfc8-4be6-b1ec-0bb179fcf721\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.805162 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-stats-auth\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.805198 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f69sf\" (UniqueName: \"kubernetes.io/projected/b4b82891-39be-4580-8ec1-80e78114ca96-kube-api-access-f69sf\") pod \"marketplace-operator-79b997595-9lrgh\" (UID: \"b4b82891-39be-4580-8ec1-80e78114ca96\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:15 crc kubenswrapper[4805]: E0217 00:25:15.805304 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 00:25:16.305289513 +0000 UTC m=+142.321098911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.807578 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-trusted-ca\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.809606 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.809839 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c367e959-10fb-43d9-baf3-31123c06738b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.811565 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-service-ca-bundle\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.824041 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dd6sw\" (UID: \"eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.824148 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-metrics-tls\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.824511 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dd6sw\" (UID: \"eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.824773 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.826377 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbba70e9-cfc8-4be6-b1ec-0bb179fcf721-config\") pod \"kube-apiserver-operator-766d6c64bb-tlsxw\" (UID: \"dbba70e9-cfc8-4be6-b1ec-0bb179fcf721\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.827670 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.828083 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-default-certificate\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.828263 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19c7b0a4-a389-48ec-90d2-766e8891a87b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.828720 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-registry-certificates\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.829197 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c367e959-10fb-43d9-baf3-31123c06738b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.830405 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbba70e9-cfc8-4be6-b1ec-0bb179fcf721-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tlsxw\" (UID: \"dbba70e9-cfc8-4be6-b1ec-0bb179fcf721\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.832058 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-stats-auth\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.837234 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" event={"ID":"34ca278b-8fb7-4658-a073-e8aefda92bed","Type":"ContainerStarted","Data":"a30789f092088fc2497aaec3c78d7d774e6241028f37f4afd6356f887835ebdd"} Feb 17 
00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.840875 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f1342ea-63a1-446c-9b69-c3c0f5c4adc0-proxy-tls\") pod \"machine-config-controller-84d6567774-s56fj\" (UID: \"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.844107 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f1342ea-63a1-446c-9b69-c3c0f5c4adc0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-s56fj\" (UID: \"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.845138 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" event={"ID":"33b17555-0aa0-481c-b0e4-23484aa43ba9","Type":"ContainerStarted","Data":"5619347e04f3bcea4433c185324a1843787ebdaf6c2401029dd3782f3294ca17"} Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.845181 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" event={"ID":"33b17555-0aa0-481c-b0e4-23484aa43ba9","Type":"ContainerStarted","Data":"992ca3a508b62658ed4c5fac6fc8b0972f4a554d8c9274ebb970848a7e075b72"} Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.845434 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-trusted-ca\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.845498 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/19c7b0a4-a389-48ec-90d2-766e8891a87b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.847798 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmddf\" (UniqueName: \"kubernetes.io/projected/2f3b3979-05ee-4c4b-90a2-1a35e1c34c3a-kube-api-access-xmddf\") pod \"migrator-59844c95c7-lnqjx\" (UID: \"2f3b3979-05ee-4c4b-90a2-1a35e1c34c3a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.853985 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-registry-tls\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.863060 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-metrics-certs\") pod \"router-default-5444994796-nl2qv\" (UID: 
\"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.863520 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" event={"ID":"68bd2261-de7d-47ae-a688-59fa77073077","Type":"ContainerStarted","Data":"d8f2db17c779db6734e78f8adb7ab9fa1ae4bb6419b4b5d730289b3e34c17d14"} Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.872186 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn88w\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-kube-api-access-pn88w\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.876595 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.879571 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29521440-8tt24"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.881547 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-bound-sa-token\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.881697 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" event={"ID":"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff","Type":"ContainerStarted","Data":"bcf71492587c045c7caa0aae220eb4fb2b075c35648615ba74a9ca7e550c29d6"} Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.886041 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b4l7s"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.887141 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r5qzl"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.898687 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r8tr\" (UniqueName: \"kubernetes.io/projected/2a8b14bb-d777-4c34-9476-1d01f5cb0b99-kube-api-access-8r8tr\") pod \"ingress-operator-5b745b69d9-t8xts\" (UID: \"2a8b14bb-d777-4c34-9476-1d01f5cb0b99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906192 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" event={"ID":"89d182b3-73de-4706-9081-580ff1012a8f","Type":"ContainerStarted","Data":"9d7da95f7f80f1d29324c55c0c8cfd4a918eac5c1114dcd07a672ae0372ce5e9"} Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906567 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n44jz\" (UniqueName: \"kubernetes.io/projected/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-kube-api-access-n44jz\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 
00:25:15.906599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6qwt\" (UniqueName: \"kubernetes.io/projected/53331bd6-ce61-4cf5-a403-34b55ba2fed0-kube-api-access-f6qwt\") pod \"service-ca-9c57cc56f-49hsz\" (UID: \"53331bd6-ce61-4cf5-a403-34b55ba2fed0\") " pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906624 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3ec24d0-f900-45ff-a0fb-fb6cd6f24324-cert\") pod \"ingress-canary-t5w9q\" (UID: \"b3ec24d0-f900-45ff-a0fb-fb6cd6f24324\") " pod="openshift-ingress-canary/ingress-canary-t5w9q" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906645 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt9js\" (UniqueName: \"kubernetes.io/projected/3b8dca48-97f6-4af6-a4bf-38d2a5571501-kube-api-access-wt9js\") pod \"service-ca-operator-777779d784-c9zfj\" (UID: \"3b8dca48-97f6-4af6-a4bf-38d2a5571501\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906669 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-socket-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906696 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4208e92a-1970-441e-a265-f7459d384c6f-secret-volume\") pod \"collect-profiles-29521455-gxtgv\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906713 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/53331bd6-ce61-4cf5-a403-34b55ba2fed0-signing-key\") pod \"service-ca-9c57cc56f-49hsz\" (UID: \"53331bd6-ce61-4cf5-a403-34b55ba2fed0\") " pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906737 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z48j2\" (UniqueName: \"kubernetes.io/projected/c723bdca-c9ea-41de-b364-5d0ea1915909-kube-api-access-z48j2\") pod \"multus-admission-controller-857f4d67dd-rlklw\" (UID: \"c723bdca-c9ea-41de-b364-5d0ea1915909\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906754 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b8dca48-97f6-4af6-a4bf-38d2a5571501-config\") pod \"service-ca-operator-777779d784-c9zfj\" (UID: \"3b8dca48-97f6-4af6-a4bf-38d2a5571501\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906782 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgz7h\" (UniqueName: \"kubernetes.io/projected/b99d11ab-7f68-4b1c-82d1-4afa367335ac-kube-api-access-hgz7h\") pod \"machine-config-server-qq794\" (UID: 
\"b99d11ab-7f68-4b1c-82d1-4afa367335ac\") " pod="openshift-machine-config-operator/machine-config-server-qq794" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906803 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc20b905-c7f8-491a-8311-f7a7107d05b1-metrics-tls\") pod \"dns-default-l2g7w\" (UID: \"fc20b905-c7f8-491a-8311-f7a7107d05b1\") " pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906820 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lrgh\" (UID: \"b4b82891-39be-4580-8ec1-80e78114ca96\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906845 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c723bdca-c9ea-41de-b364-5d0ea1915909-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rlklw\" (UID: \"c723bdca-c9ea-41de-b364-5d0ea1915909\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906866 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0cf51326-66d4-4091-be72-bade050afd5d-tmpfs\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906890 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906915 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-mountpoint-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906936 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-plugins-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.906960 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlq9p\" (UniqueName: \"kubernetes.io/projected/fc20b905-c7f8-491a-8311-f7a7107d05b1-kube-api-access-qlq9p\") pod \"dns-default-l2g7w\" (UID: \"fc20b905-c7f8-491a-8311-f7a7107d05b1\") " pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.907005 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzhsg\" (UniqueName: 
\"kubernetes.io/projected/4208e92a-1970-441e-a265-f7459d384c6f-kube-api-access-tzhsg\") pod \"collect-profiles-29521455-gxtgv\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.907041 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b99d11ab-7f68-4b1c-82d1-4afa367335ac-node-bootstrap-token\") pod \"machine-config-server-qq794\" (UID: \"b99d11ab-7f68-4b1c-82d1-4afa367335ac\") " pod="openshift-machine-config-operator/machine-config-server-qq794" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.907072 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4208e92a-1970-441e-a265-f7459d384c6f-config-volume\") pod \"collect-profiles-29521455-gxtgv\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.907092 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf51326-66d4-4091-be72-bade050afd5d-apiservice-cert\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.907112 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/60bcdb5c-be8b-4095-b909-0ea48bb3ff18-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cp7v9\" (UID: \"60bcdb5c-be8b-4095-b909-0ea48bb3ff18\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.907135 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/53331bd6-ce61-4cf5-a403-34b55ba2fed0-signing-cabundle\") pod \"service-ca-9c57cc56f-49hsz\" (UID: \"53331bd6-ce61-4cf5-a403-34b55ba2fed0\") " pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.907166 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9npsz\" (UniqueName: \"kubernetes.io/projected/0cf51326-66d4-4091-be72-bade050afd5d-kube-api-access-9npsz\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.907192 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-csi-data-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.908237 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69sf\" (UniqueName: \"kubernetes.io/projected/b4b82891-39be-4580-8ec1-80e78114ca96-kube-api-access-f69sf\") pod \"marketplace-operator-79b997595-9lrgh\" (UID: 
\"b4b82891-39be-4580-8ec1-80e78114ca96\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.908261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-registration-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.908284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzjcs\" (UniqueName: \"kubernetes.io/projected/60bcdb5c-be8b-4095-b909-0ea48bb3ff18-kube-api-access-rzjcs\") pod \"package-server-manager-789f6589d5-cp7v9\" (UID: \"60bcdb5c-be8b-4095-b909-0ea48bb3ff18\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.908301 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b8dca48-97f6-4af6-a4bf-38d2a5571501-serving-cert\") pod \"service-ca-operator-777779d784-c9zfj\" (UID: \"3b8dca48-97f6-4af6-a4bf-38d2a5571501\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.908355 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc20b905-c7f8-491a-8311-f7a7107d05b1-config-volume\") pod \"dns-default-l2g7w\" (UID: \"fc20b905-c7f8-491a-8311-f7a7107d05b1\") " pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.908378 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lrgh\" (UID: \"b4b82891-39be-4580-8ec1-80e78114ca96\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.908395 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf51326-66d4-4091-be72-bade050afd5d-webhook-cert\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.908415 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b99d11ab-7f68-4b1c-82d1-4afa367335ac-certs\") pod \"machine-config-server-qq794\" (UID: \"b99d11ab-7f68-4b1c-82d1-4afa367335ac\") " pod="openshift-machine-config-operator/machine-config-server-qq794" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.908442 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rnbr\" (UniqueName: \"kubernetes.io/projected/b3ec24d0-f900-45ff-a0fb-fb6cd6f24324-kube-api-access-6rnbr\") pod \"ingress-canary-t5w9q\" (UID: \"b3ec24d0-f900-45ff-a0fb-fb6cd6f24324\") " pod="openshift-ingress-canary/ingress-canary-t5w9q" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.911095 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3b8dca48-97f6-4af6-a4bf-38d2a5571501-config\") pod \"service-ca-operator-777779d784-c9zfj\" (UID: \"3b8dca48-97f6-4af6-a4bf-38d2a5571501\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.911174 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-mountpoint-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.911559 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0cf51326-66d4-4091-be72-bade050afd5d-tmpfs\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:15 crc kubenswrapper[4805]: E0217 00:25:15.911898 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:16.411883241 +0000 UTC m=+142.427692639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.912664 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-plugins-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.912754 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-socket-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.912819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-csi-data-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.912991 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-registration-dir\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.914590 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/53331bd6-ce61-4cf5-a403-34b55ba2fed0-signing-key\") pod \"service-ca-9c57cc56f-49hsz\" (UID: \"53331bd6-ce61-4cf5-a403-34b55ba2fed0\") " pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.914804 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/53331bd6-ce61-4cf5-a403-34b55ba2fed0-signing-cabundle\") pod \"service-ca-9c57cc56f-49hsz\" (UID: \"53331bd6-ce61-4cf5-a403-34b55ba2fed0\") " pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.914812 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mttrb" event={"ID":"d34bd20a-4947-47af-b757-59246bbda398","Type":"ContainerStarted","Data":"cbcff21ff1979020c775a0405ad4f0048321446dc92ded6a9fbebe179ab0a0c0"} Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.914855 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mttrb" event={"ID":"d34bd20a-4947-47af-b757-59246bbda398","Type":"ContainerStarted","Data":"ce8abcc50c5de7eedcc2914cf767a90157a4feb99845fdc3c0825cc4e5de73c4"} Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.915450 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.915780 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc20b905-c7f8-491a-8311-f7a7107d05b1-config-volume\") pod \"dns-default-l2g7w\" (UID: \"fc20b905-c7f8-491a-8311-f7a7107d05b1\") " pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.917581 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9lrgh\" (UID: \"b4b82891-39be-4580-8ec1-80e78114ca96\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.918583 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c723bdca-c9ea-41de-b364-5d0ea1915909-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rlklw\" (UID: \"c723bdca-c9ea-41de-b364-5d0ea1915909\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.918647 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-mttrb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.918739 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mttrb" podUID="d34bd20a-4947-47af-b757-59246bbda398" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.919310 4805 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc20b905-c7f8-491a-8311-f7a7107d05b1-metrics-tls\") pod \"dns-default-l2g7w\" (UID: \"fc20b905-c7f8-491a-8311-f7a7107d05b1\") " pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.919464 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/60bcdb5c-be8b-4095-b909-0ea48bb3ff18-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cp7v9\" (UID: \"60bcdb5c-be8b-4095-b909-0ea48bb3ff18\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.920149 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cf51326-66d4-4091-be72-bade050afd5d-apiservice-cert\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.920690 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4208e92a-1970-441e-a265-f7459d384c6f-config-volume\") pod \"collect-profiles-29521455-gxtgv\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.920799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-bound-sa-token\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.921548 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b99d11ab-7f68-4b1c-82d1-4afa367335ac-node-bootstrap-token\") pod \"machine-config-server-qq794\" (UID: \"b99d11ab-7f68-4b1c-82d1-4afa367335ac\") " pod="openshift-machine-config-operator/machine-config-server-qq794" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.923354 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b99d11ab-7f68-4b1c-82d1-4afa367335ac-certs\") pod \"machine-config-server-qq794\" (UID: \"b99d11ab-7f68-4b1c-82d1-4afa367335ac\") " pod="openshift-machine-config-operator/machine-config-server-qq794" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.923575 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3ec24d0-f900-45ff-a0fb-fb6cd6f24324-cert\") pod \"ingress-canary-t5w9q\" (UID: \"b3ec24d0-f900-45ff-a0fb-fb6cd6f24324\") " pod="openshift-ingress-canary/ingress-canary-t5w9q" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.923615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4208e92a-1970-441e-a265-f7459d384c6f-secret-volume\") pod \"collect-profiles-29521455-gxtgv\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.924680 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" event={"ID":"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c","Type":"ContainerStarted","Data":"158d42274c2e5bc3274221488c3848a7375363fb0dea81f7bd26493a2cc5d4b2"} Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.924908 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b8dca48-97f6-4af6-a4bf-38d2a5571501-serving-cert\") pod \"service-ca-operator-777779d784-c9zfj\" (UID: \"3b8dca48-97f6-4af6-a4bf-38d2a5571501\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.929255 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" event={"ID":"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8","Type":"ContainerStarted","Data":"9bb6614e00d1016ea8a3ab77ce1337289f7926f178cd742f0e25042a7a7cffd1"} Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.929716 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9lrgh\" (UID: \"b4b82891-39be-4580-8ec1-80e78114ca96\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.931491 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cf51326-66d4-4091-be72-bade050afd5d-webhook-cert\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.950233 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19c7b0a4-a389-48ec-90d2-766e8891a87b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.956531 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t9l4h"] Feb 17 00:25:15 crc kubenswrapper[4805]: I0217 00:25:15.993469 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dd6sw\" (UID: \"eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.002085 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.006583 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhs2d\" (UniqueName: \"kubernetes.io/projected/9f1342ea-63a1-446c-9b69-c3c0f5c4adc0-kube-api-access-vhs2d\") pod \"machine-config-controller-84d6567774-s56fj\" (UID: \"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.017276 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.017430 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:16.517391906 +0000 UTC m=+142.533201304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.017610 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.018890 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:16.51888111 +0000 UTC m=+142.534690508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.023364 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbba70e9-cfc8-4be6-b1ec-0bb179fcf721-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tlsxw\" (UID: \"dbba70e9-cfc8-4be6-b1ec-0bb179fcf721\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.024117 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.040136 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh52b\" (UniqueName: \"kubernetes.io/projected/19c7b0a4-a389-48ec-90d2-766e8891a87b-kube-api-access-bh52b\") pod \"cluster-image-registry-operator-dc59b4c8b-vb9ng\" (UID: \"19c7b0a4-a389-48ec-90d2-766e8891a87b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.050909 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.063815 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5gpq\" (UniqueName: \"kubernetes.io/projected/9f87cfb8-eb1e-4bbb-82eb-255544ecdef1-kube-api-access-f5gpq\") pod \"router-default-5444994796-nl2qv\" (UID: \"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1\") " pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.076509 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.094690 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.100239 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.103006 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzhsg\" (UniqueName: \"kubernetes.io/projected/4208e92a-1970-441e-a265-f7459d384c6f-kube-api-access-tzhsg\") pod \"collect-profiles-29521455-gxtgv\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.105293 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rnbr\" (UniqueName: \"kubernetes.io/projected/b3ec24d0-f900-45ff-a0fb-fb6cd6f24324-kube-api-access-6rnbr\") pod \"ingress-canary-t5w9q\" (UID: \"b3ec24d0-f900-45ff-a0fb-fb6cd6f24324\") " pod="openshift-ingress-canary/ingress-canary-t5w9q" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.117168 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.118376 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.118928 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:16.618907253 +0000 UTC m=+142.634716651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.129874 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z48j2\" (UniqueName: \"kubernetes.io/projected/c723bdca-c9ea-41de-b364-5d0ea1915909-kube-api-access-z48j2\") pod \"multus-admission-controller-857f4d67dd-rlklw\" (UID: \"c723bdca-c9ea-41de-b364-5d0ea1915909\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.133868 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.138840 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgz7h\" (UniqueName: \"kubernetes.io/projected/b99d11ab-7f68-4b1c-82d1-4afa367335ac-kube-api-access-hgz7h\") pod \"machine-config-server-qq794\" (UID: \"b99d11ab-7f68-4b1c-82d1-4afa367335ac\") " pod="openshift-machine-config-operator/machine-config-server-qq794" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.157163 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.174475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt9js\" (UniqueName: \"kubernetes.io/projected/3b8dca48-97f6-4af6-a4bf-38d2a5571501-kube-api-access-wt9js\") pod \"service-ca-operator-777779d784-c9zfj\" (UID: \"3b8dca48-97f6-4af6-a4bf-38d2a5571501\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.175263 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.184878 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tnfnz"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.192060 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzjcs\" (UniqueName: \"kubernetes.io/projected/60bcdb5c-be8b-4095-b909-0ea48bb3ff18-kube-api-access-rzjcs\") pod \"package-server-manager-789f6589d5-cp7v9\" (UID: \"60bcdb5c-be8b-4095-b909-0ea48bb3ff18\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.193165 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.194263 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qq794" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.201208 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69sf\" (UniqueName: \"kubernetes.io/projected/b4b82891-39be-4580-8ec1-80e78114ca96-kube-api-access-f69sf\") pod \"marketplace-operator-79b997595-9lrgh\" (UID: \"b4b82891-39be-4580-8ec1-80e78114ca96\") " pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.219987 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.220374 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 00:25:16.720359198 +0000 UTC m=+142.736168596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.226775 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t5w9q" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.243137 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlq9p\" (UniqueName: \"kubernetes.io/projected/fc20b905-c7f8-491a-8311-f7a7107d05b1-kube-api-access-qlq9p\") pod \"dns-default-l2g7w\" (UID: \"fc20b905-c7f8-491a-8311-f7a7107d05b1\") " pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.244761 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n44jz\" (UniqueName: \"kubernetes.io/projected/7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d-kube-api-access-n44jz\") pod \"csi-hostpathplugin-xs2qc\" (UID: \"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d\") " pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.262825 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6qwt\" (UniqueName: \"kubernetes.io/projected/53331bd6-ce61-4cf5-a403-34b55ba2fed0-kube-api-access-f6qwt\") pod \"service-ca-9c57cc56f-49hsz\" (UID: \"53331bd6-ce61-4cf5-a403-34b55ba2fed0\") " pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" Feb 17 00:25:16 crc kubenswrapper[4805]: W0217 00:25:16.284380 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cbd4485_3856_461d_9346_c2dee82e9bb0.slice/crio-c086853af1563bef5514dd1b3c2e4ea136a216589134bd4895e557b5b42f2c57 WatchSource:0}: Error finding container c086853af1563bef5514dd1b3c2e4ea136a216589134bd4895e557b5b42f2c57: Status 404 returned error can't find the container with id c086853af1563bef5514dd1b3c2e4ea136a216589134bd4895e557b5b42f2c57 Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.317717 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.322942 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.323561 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:16.823539214 +0000 UTC m=+142.839348612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.340675 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9npsz\" (UniqueName: \"kubernetes.io/projected/0cf51326-66d4-4091-be72-bade050afd5d-kube-api-access-9npsz\") pod \"packageserver-d55dfcdfc-xp7wl\" (UID: \"0cf51326-66d4-4091-be72-bade050afd5d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.362769 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.434334 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.435120 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:16.935094679 +0000 UTC m=+142.950904087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.442839 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.450650 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.466441 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.467729 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.480680 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.494885 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9pqmt"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.500863 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.511089 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.512961 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.520483 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.537721 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.538498 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:17.038480021 +0000 UTC m=+143.054289409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.639644 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.639967 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:17.139955207 +0000 UTC m=+143.155764605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.644476 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.678504 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.680782 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.695401 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.696009 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mttrb" podStartSLOduration=121.695990107 podStartE2EDuration="2m1.695990107s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:16.69104257 +0000 UTC m=+142.706851988" watchObservedRunningTime="2026-02-17 00:25:16.695990107 +0000 UTC m=+142.711799505" Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.740355 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.740813 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:17.240792154 +0000 UTC m=+143.256601552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.740921 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.741263 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:17.241257468 +0000 UTC m=+143.257066866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.748163 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw"] Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.776973 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h"] Feb 17 00:25:16 crc kubenswrapper[4805]: W0217 00:25:16.837864 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5ff92a2_7d3c_4175_b7d4_1b2dd71d3c4b.slice/crio-82fc663b45af97b22bfb165a53cde4ddda0106101691d78d1dc3cff002ccebda WatchSource:0}: Error finding container 82fc663b45af97b22bfb165a53cde4ddda0106101691d78d1dc3cff002ccebda: Status 404 returned error can't find the container with id 82fc663b45af97b22bfb165a53cde4ddda0106101691d78d1dc3cff002ccebda Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.842434 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.842809 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:17.342784265 +0000 UTC m=+143.358593813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.959426 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:16 crc kubenswrapper[4805]: E0217 00:25:16.960160 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:17.460144711 +0000 UTC m=+143.475954109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.975945 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" event={"ID":"c89a2f9e-db39-452a-b9ec-02a272ed0943","Type":"ContainerStarted","Data":"861a7295a6260802edadc42b5320ae1d7bf98fc1753e95443d511a3fd026c68a"} Feb 17 00:25:16 crc kubenswrapper[4805]: I0217 00:25:16.983722 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" event={"ID":"33b17555-0aa0-481c-b0e4-23484aa43ba9","Type":"ContainerStarted","Data":"e43251ebf3bf1c3fc181db3ed66f6976216b1e1b17966e725c53cc3e36eac000"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.001062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" event={"ID":"68bd2261-de7d-47ae-a688-59fa77073077","Type":"ContainerStarted","Data":"861596bbab028c22deb93c7ba6a4acd2a7f5960698794a942c8cf431e2ddb6f7"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.002066 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.003609 4805 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xvrjn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.003642 4805 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" podUID="68bd2261-de7d-47ae-a688-59fa77073077" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.006154 4805 generic.go:334] "Generic (PLEG): container finished" podID="0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff" containerID="0e464bbcfc4c5d27de4f0559883927cf22c1c65b62c6cd5d604898bfc207e56e" exitCode=0 Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.006214 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" event={"ID":"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff","Type":"ContainerDied","Data":"0e464bbcfc4c5d27de4f0559883927cf22c1c65b62c6cd5d604898bfc207e56e"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.010375 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj"] Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.015707 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx"] Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.016535 4805 generic.go:334] "Generic (PLEG): container finished" podID="89d182b3-73de-4706-9081-580ff1012a8f" containerID="17e0c89c734d522cf718bbc34942febc597f070da877e8a9e5f93e043a67e2a1" exitCode=0 Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.016814 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" event={"ID":"89d182b3-73de-4706-9081-580ff1012a8f","Type":"ContainerDied","Data":"17e0c89c734d522cf718bbc34942febc597f070da877e8a9e5f93e043a67e2a1"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.018634 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" event={"ID":"2cbd4485-3856-461d-9346-c2dee82e9bb0","Type":"ContainerStarted","Data":"c086853af1563bef5514dd1b3c2e4ea136a216589134bd4895e557b5b42f2c57"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.021656 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rlklw"] Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.021909 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t9l4h" event={"ID":"24781b06-2cc6-49d0-a506-b992048e1c84","Type":"ContainerStarted","Data":"ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.021949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t9l4h" event={"ID":"24781b06-2cc6-49d0-a506-b992048e1c84","Type":"ContainerStarted","Data":"4ea166231cba90eb0b12bb5c116e413a7580c288ba54f5804ffa58e4bb59dcab"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.023674 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" event={"ID":"2a8b14bb-d777-4c34-9476-1d01f5cb0b99","Type":"ContainerStarted","Data":"6c04bf11fde7e2d74e964fb7d7cf157b4aeeded2b49d828a3297a171b96314a0"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.024643 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" event={"ID":"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c","Type":"ContainerStarted","Data":"028c300b66512aa3ca73c81e7ca8d20c66e3e839217abd96da8b01b79bc91230"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.024666 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" event={"ID":"fafbbfd8-7e64-432a-b47c-7ad2e9388f2c","Type":"ContainerStarted","Data":"4d2d58d0144c5775d0a68487634d3bc1552d30228a943cbf8bf4a04308cfb8a1"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.025534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" event={"ID":"e791d926-f75f-4056-b7ba-18d3c6474386","Type":"ContainerStarted","Data":"7296d9c9bb1babbc7eab7df3e8d854a34f6f74ace07af6fe7f42545e06775831"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.026107 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" event={"ID":"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0","Type":"ContainerStarted","Data":"f803822cdbe7a16804f7a2c7d6db9281f589865549ea57c77f0d827282ac197a"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.027085 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" event={"ID":"325ff293-1021-49e6-9f52-070c38d61359","Type":"ContainerStarted","Data":"fec413dd48bfddb0b24ba8b9f1afc65a837247045c51160ab07e5ab7bfd741e2"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.027108 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" event={"ID":"325ff293-1021-49e6-9f52-070c38d61359","Type":"ContainerStarted","Data":"7181434aae5ffb29b69737036f9cd342c184bb8309a834e4f25775184e5a5b58"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.028396 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tnfnz" event={"ID":"a046e6a8-bd3a-4064-8be5-38fed147bdcf","Type":"ContainerStarted","Data":"5f6c4f45378f59c7658af94d775ec375608a0232e90b624e6ecc1152ac3e4181"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.028419 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tnfnz" event={"ID":"a046e6a8-bd3a-4064-8be5-38fed147bdcf","Type":"ContainerStarted","Data":"b3a0993b9089924beb7cea7ceb580055bb4de1541f96deaa581cc8f1a6f11100"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.029186 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tnfnz" Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.029819 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" event={"ID":"a6bf4e6f-13c5-4276-8124-fdac5ce68cd6","Type":"ContainerStarted","Data":"0b2453f9caa8432e49fba9452e3af45c81fc8aaee56ffc342d72ce99380e14b0"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.030892 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-tnfnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.030922 4805 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-tnfnz" podUID="a046e6a8-bd3a-4064-8be5-38fed147bdcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.032991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" event={"ID":"bf20469d-03a9-4939-841d-3c7d28b75aab","Type":"ContainerStarted","Data":"9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.033023 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" event={"ID":"bf20469d-03a9-4939-841d-3c7d28b75aab","Type":"ContainerStarted","Data":"a656fe2cf919830ebf9ccf2edd36c202a4007b24b7205357413f95e2686c3913"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.033946 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.040229 4805 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-b4l7s container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.040285 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" podUID="bf20469d-03a9-4939-841d-3c7d28b75aab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.048241 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" event={"ID":"3a33b46d-a64e-4203-b3e0-ec9dc169c9d8","Type":"ContainerStarted","Data":"cbe5e2d9a01b38db8a6c25051a10fa9403bf3f2ffa8537aad8e5970c9883237c"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.062125 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" event={"ID":"7be6625f-bf67-4d23-a5e7-7be75e356db7","Type":"ContainerStarted","Data":"99036c12cc049c012de2f87f886b9fda4685f2c1093c209373fe22716fea542c"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.062823 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:17 crc kubenswrapper[4805]: E0217 00:25:17.063774 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:17.56375271 +0000 UTC m=+143.579562108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.068865 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv"] Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.075393 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" event={"ID":"eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f","Type":"ContainerStarted","Data":"e7a22ad0b6902a6ddcae48abbea6e59e343549bbfb09031322dc236c2ca6d755"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.077065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" event={"ID":"f2a2e72f-8852-4f46-8585-635698d0bcdb","Type":"ContainerStarted","Data":"d4d9e8d7480c27cdd0799d1f84eec22a352f3542def3a4691064192532865325"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.077085 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" event={"ID":"f2a2e72f-8852-4f46-8585-635698d0bcdb","Type":"ContainerStarted","Data":"79c5dfae78c93de38e6c3671043c43f4d3f4a9b10664af605975ab3455398880"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.078846 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" event={"ID":"68462f99-97a8-417d-b4ea-2857e82db19b","Type":"ContainerStarted","Data":"f43b3b9a5b05c9f5a04c89053da842862aa1da00e23a08fee3e13a750bd4e68f"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.078865 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" event={"ID":"68462f99-97a8-417d-b4ea-2857e82db19b","Type":"ContainerStarted","Data":"8a23b8b56e85d38c91a223ffdaf8b712c521c399b04677c1ee5ee4d5a56c27ee"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.080852 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr" event={"ID":"5d3c99c6-7195-427e-8cd4-f484ad5ee41c","Type":"ContainerStarted","Data":"b056d7cd5d11202c0d24bf821302b4492b658569ef904118aef1e6e89289359b"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.083218 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qq794" event={"ID":"b99d11ab-7f68-4b1c-82d1-4afa367335ac","Type":"ContainerStarted","Data":"0cefcb4502c549ae02b2a7051376d56f2a4a008a4efdcc3ffd13205394078ac0"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.085585 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" event={"ID":"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa","Type":"ContainerStarted","Data":"dbd22b16057271e16fbba3a038362bb7494846f0860d835e1252e35429522368"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.085766 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-canary/ingress-canary-t5w9q"] Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.086888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" event={"ID":"dbba70e9-cfc8-4be6-b1ec-0bb179fcf721","Type":"ContainerStarted","Data":"a9279d7132ccfa285cf1b62fff17222da623c6026bc38933abf7ee0f749488bd"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.090173 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" event={"ID":"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b","Type":"ContainerStarted","Data":"82fc663b45af97b22bfb165a53cde4ddda0106101691d78d1dc3cff002ccebda"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.092949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" event={"ID":"34ca278b-8fb7-4658-a073-e8aefda92bed","Type":"ContainerStarted","Data":"af208842d60974bce121cbc7b17e4972ad7bdd0850414acab651f14854c685bf"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.093975 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.101849 4805 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lst4d container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.101893 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" podUID="34ca278b-8fb7-4658-a073-e8aefda92bed" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.105448 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-8tt24" event={"ID":"0ed7bf5a-a6c8-47a3-8e66-0401495250f3","Type":"ContainerStarted","Data":"ae2b8acec10cf8d060bb090cfc76bf537c41996b4b62f0f6d82800173b284262"} Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.105586 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-8tt24" event={"ID":"0ed7bf5a-a6c8-47a3-8e66-0401495250f3","Type":"ContainerStarted","Data":"4abc118e4bfd62204c58d81c7e9cba120175cca2291e708a9f46a7a47fe4e36d"} Feb 17 00:25:17 crc kubenswrapper[4805]: W0217 00:25:17.147375 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc723bdca_c9ea_41de_b364_5d0ea1915909.slice/crio-4aba85b25f8240e745616905b8930cabc93d7c50463c8156081de4220a187cba WatchSource:0}: Error finding container 4aba85b25f8240e745616905b8930cabc93d7c50463c8156081de4220a187cba: Status 404 returned error can't find the container with id 4aba85b25f8240e745616905b8930cabc93d7c50463c8156081de4220a187cba Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.165146 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:17 crc kubenswrapper[4805]: E0217 00:25:17.167414 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:17.66740076 +0000 UTC m=+143.683210158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.172768 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng"] Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.193413 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mttrb" Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.266580 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:17 crc kubenswrapper[4805]: E0217 00:25:17.268526 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:17.768508735 +0000 UTC m=+143.784318133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.376658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:17 crc kubenswrapper[4805]: W0217 00:25:17.376974 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19c7b0a4_a389_48ec_90d2_766e8891a87b.slice/crio-a8cb805f19837999299a012acfc6b8f06daa014f6f364266c195cf9e2e88817c WatchSource:0}: Error finding container a8cb805f19837999299a012acfc6b8f06daa014f6f364266c195cf9e2e88817c: Status 404 returned error can't find the container with id a8cb805f19837999299a012acfc6b8f06daa014f6f364266c195cf9e2e88817c Feb 17 00:25:17 crc kubenswrapper[4805]: E0217 00:25:17.377001 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:17.876988319 +0000 UTC m=+143.892797727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.469969 4805 csr.go:261] certificate signing request csr-7qr82 is approved, waiting to be issued Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.477722 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:17 crc kubenswrapper[4805]: E0217 00:25:17.493968 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:17.993941643 +0000 UTC m=+144.009751051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.494370 4805 csr.go:257] certificate signing request csr-7qr82 is issued Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.550946 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj"] Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.593316 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:17 crc kubenswrapper[4805]: E0217 00:25:17.594099 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:18.094085279 +0000 UTC m=+144.109894667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.616456 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lrgh"] Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.695103 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:17 crc kubenswrapper[4805]: E0217 00:25:17.695586 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:18.195548775 +0000 UTC m=+144.211358173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:17 crc kubenswrapper[4805]: W0217 00:25:17.706836 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4b82891_39be_4580_8ec1_80e78114ca96.slice/crio-fc434222df0178ef43b3d5a6aad5d0fb5fe6e24dace7fee64b01ec886aaf18a5 WatchSource:0}: Error finding container fc434222df0178ef43b3d5a6aad5d0fb5fe6e24dace7fee64b01ec886aaf18a5: Status 404 returned error can't find the container with id fc434222df0178ef43b3d5a6aad5d0fb5fe6e24dace7fee64b01ec886aaf18a5 Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.781078 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l2g7w"] Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.783016 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-49hsz"] Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.797678 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:17 crc kubenswrapper[4805]: E0217 00:25:17.798240 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:18.298210045 +0000 UTC m=+144.314019433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.908246 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:17 crc kubenswrapper[4805]: E0217 00:25:17.908623 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:18.408607195 +0000 UTC m=+144.424416583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.908785 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:17 crc kubenswrapper[4805]: E0217 00:25:17.909012 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:18.409005457 +0000 UTC m=+144.424814855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.964497 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl"] Feb 17 00:25:17 crc kubenswrapper[4805]: I0217 00:25:17.968736 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9"] Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.009692 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.010070 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:18.51005352 +0000 UTC m=+144.525862918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.015661 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4gcsk" podStartSLOduration=123.015620275 podStartE2EDuration="2m3.015620275s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.011365829 +0000 UTC m=+144.027175227" watchObservedRunningTime="2026-02-17 00:25:18.015620275 +0000 UTC m=+144.031429673" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.056993 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xs2qc"] Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.111091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.111483 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:18.611471384 +0000 UTC m=+144.627280782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: W0217 00:25:18.113697 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cf51326_66d4_4091_be72_bade050afd5d.slice/crio-c7401d83c1775d98e61546f079ff4a2b02460effd72ad0cc2ff9a6011c961975 WatchSource:0}: Error finding container c7401d83c1775d98e61546f079ff4a2b02460effd72ad0cc2ff9a6011c961975: Status 404 returned error can't find the container with id c7401d83c1775d98e61546f079ff4a2b02460effd72ad0cc2ff9a6011c961975 Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.128280 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tnfnz" podStartSLOduration=123.128259602 podStartE2EDuration="2m3.128259602s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.125687756 +0000 UTC m=+144.141497164" watchObservedRunningTime="2026-02-17 00:25:18.128259602 +0000 UTC m=+144.144069000" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.145559 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" event={"ID":"7be6625f-bf67-4d23-a5e7-7be75e356db7","Type":"ContainerStarted","Data":"a466c6b7c25e0b155cc70eb15d1b978ce03b9d7c6e28a3e0f2fa642cc5b2c68f"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.149888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" event={"ID":"b4b82891-39be-4580-8ec1-80e78114ca96","Type":"ContainerStarted","Data":"fc434222df0178ef43b3d5a6aad5d0fb5fe6e24dace7fee64b01ec886aaf18a5"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.158847 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nl2qv" event={"ID":"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1","Type":"ContainerStarted","Data":"975f7ebefb1dd70c2af25e43514ee6bddcab39b46820d99431433f33129292a4"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.184476 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" event={"ID":"19c7b0a4-a389-48ec-90d2-766e8891a87b","Type":"ContainerStarted","Data":"a8cb805f19837999299a012acfc6b8f06daa014f6f364266c195cf9e2e88817c"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.197435 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" event={"ID":"c89a2f9e-db39-452a-b9ec-02a272ed0943","Type":"ContainerStarted","Data":"f0277ccd488237f5750768d49c27d127a002fbf07246ee333261fd6a74228044"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.210697 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9q7jv" 
podStartSLOduration=123.210664293 podStartE2EDuration="2m3.210664293s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.203793649 +0000 UTC m=+144.219603047" watchObservedRunningTime="2026-02-17 00:25:18.210664293 +0000 UTC m=+144.226473681" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.211881 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.212014 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:18.711989152 +0000 UTC m=+144.727798550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.212173 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.212567 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:18.712554559 +0000 UTC m=+144.728363947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.230702 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" event={"ID":"2cbd4485-3856-461d-9346-c2dee82e9bb0","Type":"ContainerStarted","Data":"9ae3b7c4d8c30507faf1326a2be2fb9e78d1553517322e90e9a158e6fa42eb0a"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.251563 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bb4kv" podStartSLOduration=123.251531423 podStartE2EDuration="2m3.251531423s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.250187413 +0000 UTC m=+144.265996811" watchObservedRunningTime="2026-02-17 00:25:18.251531423 +0000 UTC m=+144.267340821" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.284400 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" event={"ID":"40f0f6cb-cee2-4d8f-a1e1-5279dbbc76d0","Type":"ContainerStarted","Data":"df4c461e0afe3ae201c005267e31e9f167ca2a5e473c7e4fe310f320e944d962"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.290846 4805 generic.go:334] "Generic (PLEG): container finished" podID="68462f99-97a8-417d-b4ea-2857e82db19b" containerID="f43b3b9a5b05c9f5a04c89053da842862aa1da00e23a08fee3e13a750bd4e68f" exitCode=0 Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.290926 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" event={"ID":"68462f99-97a8-417d-b4ea-2857e82db19b","Type":"ContainerDied","Data":"f43b3b9a5b05c9f5a04c89053da842862aa1da00e23a08fee3e13a750bd4e68f"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.305093 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" event={"ID":"a6bf4e6f-13c5-4276-8124-fdac5ce68cd6","Type":"ContainerStarted","Data":"c8504ebc61837720ed35e3c3b58fc97cdcdfb609a08bd7063a251fb95397291e"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.305744 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.314621 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.316095 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-17 00:25:18.816078204 +0000 UTC m=+144.831887602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.330240 4805 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kr7f6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.330254 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gv6f4" podStartSLOduration=123.330234803 podStartE2EDuration="2m3.330234803s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.295256357 +0000 UTC m=+144.311065755" watchObservedRunningTime="2026-02-17 00:25:18.330234803 +0000 UTC m=+144.346044201" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.330316 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" podUID="a6bf4e6f-13c5-4276-8124-fdac5ce68cd6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.331921 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" podStartSLOduration=123.331916633 podStartE2EDuration="2m3.331916633s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.330595094 +0000 UTC m=+144.346404492" watchObservedRunningTime="2026-02-17 00:25:18.331916633 +0000 UTC m=+144.347726031" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.347542 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qq794" event={"ID":"b99d11ab-7f68-4b1c-82d1-4afa367335ac","Type":"ContainerStarted","Data":"0855ac2d67d202bc25eed58509931aaa1a768a835c028f4ae553d247c9ea6c23"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.349394 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" event={"ID":"c723bdca-c9ea-41de-b364-5d0ea1915909","Type":"ContainerStarted","Data":"4aba85b25f8240e745616905b8930cabc93d7c50463c8156081de4220a187cba"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.351282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t5w9q" event={"ID":"b3ec24d0-f900-45ff-a0fb-fb6cd6f24324","Type":"ContainerStarted","Data":"495bcc5d846147c70b65cbb4bed6df1b55474ae9a07512a8f6efa4858882e944"} Feb 17 00:25:18 
crc kubenswrapper[4805]: I0217 00:25:18.353059 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" event={"ID":"4208e92a-1970-441e-a265-f7459d384c6f","Type":"ContainerStarted","Data":"61d9a27ac91fc62c64132c69bf7228fe5d3d556044f65cacdcfa3843a3e4aec5"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.354242 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" event={"ID":"8a4f45dd-052e-4cc4-b491-ec02b32ea1fa","Type":"ContainerStarted","Data":"4a4fe64036365557f764cacfb5b9e7419ab19b356ba2f5a8b22e77737b06e60b"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.355386 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.365037 4805 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hj2wh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.365104 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" podUID="8a4f45dd-052e-4cc4-b491-ec02b32ea1fa" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.371060 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29521440-8tt24" podStartSLOduration=123.371046202 podStartE2EDuration="2m3.371046202s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.368748874 +0000 UTC m=+144.384558272" watchObservedRunningTime="2026-02-17 00:25:18.371046202 +0000 UTC m=+144.386855600" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.371997 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" event={"ID":"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0","Type":"ContainerStarted","Data":"71e8df887c724f14f85f58552e61911c53e80e2414046a5678297bd662ca6784"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.404931 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" event={"ID":"3b8dca48-97f6-4af6-a4bf-38d2a5571501","Type":"ContainerStarted","Data":"f5ecc9387e51743893b779fe88ec3473c56b77f5f32bca22c6b0b470fe9a6be4"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.417379 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-t9l4h" podStartSLOduration=123.417366884 podStartE2EDuration="2m3.417366884s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.416579811 +0000 UTC m=+144.432389209" watchObservedRunningTime="2026-02-17 00:25:18.417366884 +0000 UTC m=+144.433176282" Feb 17 00:25:18 crc kubenswrapper[4805]: 
I0217 00:25:18.417783 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.418087 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:18.918077275 +0000 UTC m=+144.933886673 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.473671 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" event={"ID":"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b","Type":"ContainerStarted","Data":"1d98503959ea2a455e98389d26a774bbfc73ff1bb7c516473eebb910c9bd2f48"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.511363 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" podStartSLOduration=123.511348088 podStartE2EDuration="2m3.511348088s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.510612736 +0000 UTC m=+144.526422134" watchObservedRunningTime="2026-02-17 00:25:18.511348088 +0000 UTC m=+144.527157486" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.517971 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx" event={"ID":"2f3b3979-05ee-4c4b-90a2-1a35e1c34c3a","Type":"ContainerStarted","Data":"c781a184e655a2a94e525e46a3187a521dbe975899df378027d52b7909de683a"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.520499 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 00:20:17 +0000 UTC, rotation deadline is 2026-12-25 18:15:21.191278036 +0000 UTC Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.520535 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7481h50m2.670747216s for next certificate rotation Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.520541 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.520641 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.020621143 +0000 UTC m=+145.036430581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.520820 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.533944 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.033932757 +0000 UTC m=+145.049742155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.562389 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" podStartSLOduration=123.562373759 podStartE2EDuration="2m3.562373759s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.561823263 +0000 UTC m=+144.577632661" watchObservedRunningTime="2026-02-17 00:25:18.562373759 +0000 UTC m=+144.578183157" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.576661 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" event={"ID":"e791d926-f75f-4056-b7ba-18d3c6474386","Type":"ContainerStarted","Data":"3e983e5537faf48877bfcade0cf12e3529c8d5b349081b36740f64e108bc1c8b"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.597363 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l2g7w" event={"ID":"fc20b905-c7f8-491a-8311-f7a7107d05b1","Type":"ContainerStarted","Data":"f90f5c9970a4101025d68ebf8f7652d473a593b2c8ee6633ded5bf0407415984"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.598976 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" podStartSLOduration=123.598965923 podStartE2EDuration="2m3.598965923s" 
podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.596225382 +0000 UTC m=+144.612034770" watchObservedRunningTime="2026-02-17 00:25:18.598965923 +0000 UTC m=+144.614775321" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.624809 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.625187 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.12517353 +0000 UTC m=+145.140982928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.649169 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr" event={"ID":"5d3c99c6-7195-427e-8cd4-f484ad5ee41c","Type":"ContainerStarted","Data":"9ee6e895eecce77a75c723bb1665f67b42b840772a626072255e587ccacd8690"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.658498 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" event={"ID":"60bcdb5c-be8b-4095-b909-0ea48bb3ff18","Type":"ContainerStarted","Data":"91a54435b444c9ceebe9cb15e2d365285b737aa9e3d5d849d741eed62ec8f5ad"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.669302 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-9pqmt" podStartSLOduration=123.669288906 podStartE2EDuration="2m3.669288906s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.634177006 +0000 UTC m=+144.649986404" watchObservedRunningTime="2026-02-17 00:25:18.669288906 +0000 UTC m=+144.685098304" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.671411 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" event={"ID":"2a8b14bb-d777-4c34-9476-1d01f5cb0b99","Type":"ContainerStarted","Data":"c9b7b3b9db38687cebd2cae9709592ecc872e234ee21dae4bae193295e46f945"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.696732 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8s7qc" podStartSLOduration=123.696714509 podStartE2EDuration="2m3.696714509s" podCreationTimestamp="2026-02-17 00:23:15 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.671025918 +0000 UTC m=+144.686835306" watchObservedRunningTime="2026-02-17 00:25:18.696714509 +0000 UTC m=+144.712523907" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.723034 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" event={"ID":"53331bd6-ce61-4cf5-a403-34b55ba2fed0","Type":"ContainerStarted","Data":"1c767af9884cceb08c94b67091f5bf3228e0f784613194b3c8a311c9a1855202"} Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.724457 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-tnfnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.724573 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tnfnz" podUID="a046e6a8-bd3a-4064-8be5-38fed147bdcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.729027 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.729310 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.229297464 +0000 UTC m=+145.245106862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.744665 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.749672 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.839072 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.839475 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.339453807 +0000 UTC m=+145.355263205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.839810 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.845149 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.345135375 +0000 UTC m=+145.360944763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.901687 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qq794" podStartSLOduration=5.90167217 podStartE2EDuration="5.90167217s" podCreationTimestamp="2026-02-17 00:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.900905667 +0000 UTC m=+144.916715065" watchObservedRunningTime="2026-02-17 00:25:18.90167217 +0000 UTC m=+144.917481568" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.903562 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" podStartSLOduration=123.903556586 podStartE2EDuration="2m3.903556586s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.825532794 +0000 UTC m=+144.841342192" watchObservedRunningTime="2026-02-17 00:25:18.903556586 +0000 UTC m=+144.919365984" Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.940683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:18 crc kubenswrapper[4805]: E0217 00:25:18.941099 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.441083567 +0000 UTC m=+145.456892965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:18 crc kubenswrapper[4805]: I0217 00:25:18.970710 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5984z" podStartSLOduration=123.970694894 podStartE2EDuration="2m3.970694894s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.970139338 +0000 UTC m=+144.985948736" watchObservedRunningTime="2026-02-17 00:25:18.970694894 +0000 UTC m=+144.986504292" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.001584 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfhbv" podStartSLOduration=124.001570379 podStartE2EDuration="2m4.001570379s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:18.999783996 +0000 UTC m=+145.015593394" watchObservedRunningTime="2026-02-17 00:25:19.001570379 +0000 UTC m=+145.017379777" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.050442 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.050872 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.550851669 +0000 UTC m=+145.566661147 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.117673 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w8ppr" podStartSLOduration=124.117658617 podStartE2EDuration="2m4.117658617s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:19.058437573 +0000 UTC m=+145.074246981" watchObservedRunningTime="2026-02-17 00:25:19.117658617 +0000 UTC m=+145.133468005" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.155026 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.155187 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.655155618 +0000 UTC m=+145.670965026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.155445 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.155775 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.655767736 +0000 UTC m=+145.671577134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.256865 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.257502 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.757487009 +0000 UTC m=+145.773296407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.358447 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.358787 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.858775979 +0000 UTC m=+145.874585377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.456868 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.459389 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.459714 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:19.959698479 +0000 UTC m=+145.975507877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.561167 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.561597 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.061580267 +0000 UTC m=+146.077389665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.663657 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.663968 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.163914078 +0000 UTC m=+146.179723476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.664275 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.664762 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.164734582 +0000 UTC m=+146.180543980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.766693 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.766924 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.266897748 +0000 UTC m=+146.282707146 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.767016 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.767313 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.26730168 +0000 UTC m=+146.283111078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.772062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx" event={"ID":"2f3b3979-05ee-4c4b-90a2-1a35e1c34c3a","Type":"ContainerStarted","Data":"a805af47566594de9c05083e706c84f99baf1ed6523c9c35c5b46cdabb4d7498"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.772130 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx" event={"ID":"2f3b3979-05ee-4c4b-90a2-1a35e1c34c3a","Type":"ContainerStarted","Data":"d09f289e08b339de0735619ce2653532a2b6c3b6d7c783e510556296164de10a"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.780819 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" event={"ID":"68462f99-97a8-417d-b4ea-2857e82db19b","Type":"ContainerStarted","Data":"667e0b4ef415277406d1dd1956017b50b38f0458c61c2b7b94f7916f972172a5"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.781682 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.791437 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lnqjx" podStartSLOduration=124.791420475 podStartE2EDuration="2m4.791420475s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:19.790210179 +0000 UTC m=+145.806019567" watchObservedRunningTime="2026-02-17 00:25:19.791420475 +0000 UTC m=+145.807229863" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.793761 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" event={"ID":"2a8b14bb-d777-4c34-9476-1d01f5cb0b99","Type":"ContainerStarted","Data":"b4040c6a69aae96e3401d5fed2d2dbe8eef47a6c5925469deca5cda3a3964a51"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.799209 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" event={"ID":"dbba70e9-cfc8-4be6-b1ec-0bb179fcf721","Type":"ContainerStarted","Data":"42c4984ffb78190a39ff6d3441d62c272fdc02b3750af95625d7c2f7788d344c"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.813434 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" podStartSLOduration=124.813418876 podStartE2EDuration="2m4.813418876s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:19.81119001 +0000 UTC m=+145.826999408" watchObservedRunningTime="2026-02-17 00:25:19.813418876 
+0000 UTC m=+145.829228274" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.818924 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" event={"ID":"0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff","Type":"ContainerStarted","Data":"1aea1dc40b4bb4d32ab595a1fca3fc088e898b25ef7ef27f5e1e44d63b4ba324"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.827991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t5w9q" event={"ID":"b3ec24d0-f900-45ff-a0fb-fb6cd6f24324","Type":"ContainerStarted","Data":"b72bb2d1c4a9b13bfeaa693743c336b7ccfa2b316ac2415066d629ce0d4ca7fe"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.839290 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tlsxw" podStartSLOduration=124.839258232 podStartE2EDuration="2m4.839258232s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:19.838436727 +0000 UTC m=+145.854246125" watchObservedRunningTime="2026-02-17 00:25:19.839258232 +0000 UTC m=+145.855067630" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.849987 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" event={"ID":"0cf51326-66d4-4091-be72-bade050afd5d","Type":"ContainerStarted","Data":"88e3715343f8d69e257c413bc6b12ea8d840d73708459f55e0700586e4d2894b"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.850031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" event={"ID":"0cf51326-66d4-4091-be72-bade050afd5d","Type":"ContainerStarted","Data":"c7401d83c1775d98e61546f079ff4a2b02460effd72ad0cc2ff9a6011c961975"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.851026 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.861558 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xp7wl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.861644 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" podUID="0cf51326-66d4-4091-be72-bade050afd5d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.867707 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.868773 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.368756295 +0000 UTC m=+146.384565693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.889597 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" podStartSLOduration=124.889578272 podStartE2EDuration="2m4.889578272s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:19.888906712 +0000 UTC m=+145.904716110" watchObservedRunningTime="2026-02-17 00:25:19.889578272 +0000 UTC m=+145.905387670" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.889999 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-t8xts" podStartSLOduration=124.889992284 podStartE2EDuration="2m4.889992284s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:19.860505071 +0000 UTC m=+145.876314469" watchObservedRunningTime="2026-02-17 00:25:19.889992284 +0000 UTC m=+145.905801692" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.903707 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" event={"ID":"d5ff92a2-7d3c-4175-b7d4-1b2dd71d3c4b","Type":"ContainerStarted","Data":"794e3517e9ae49e2f7148e4bea00866c907d49fb670e5a8a2ed125d8eca433ac"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.916564 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" event={"ID":"89d182b3-73de-4706-9081-580ff1012a8f","Type":"ContainerStarted","Data":"f45efd948a3e5f9e3bf84ee012c32145969a62b9b573ffdcccde99264c1b3fc6"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.922588 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-t5w9q" podStartSLOduration=6.922574119 podStartE2EDuration="6.922574119s" podCreationTimestamp="2026-02-17 00:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:19.920628172 +0000 UTC m=+145.936437580" watchObservedRunningTime="2026-02-17 00:25:19.922574119 +0000 UTC m=+145.938383528" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.937419 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" podStartSLOduration=124.937402069 podStartE2EDuration="2m4.937402069s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:19.936688648 +0000 UTC 
m=+145.952498046" watchObservedRunningTime="2026-02-17 00:25:19.937402069 +0000 UTC m=+145.953211477" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.952237 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" event={"ID":"b4b82891-39be-4580-8ec1-80e78114ca96","Type":"ContainerStarted","Data":"b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.953395 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.954295 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9lrgh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.954356 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" podUID="b4b82891-39be-4580-8ec1-80e78114ca96" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.964306 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nl2qv" event={"ID":"9f87cfb8-eb1e-4bbb-82eb-255544ecdef1","Type":"ContainerStarted","Data":"0bbf9f061494a4ee592643b6c56c93aa7f84cd41e6f4bab2af1d951212f64abe"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.975826 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:19 crc kubenswrapper[4805]: E0217 00:25:19.979884 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.479867386 +0000 UTC m=+146.495676784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.995744 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" event={"ID":"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0","Type":"ContainerStarted","Data":"68aebd71d1e174e38880d9f09a118d94e657e5a175de5b52f3f070c286f5160e"} Feb 17 00:25:19 crc kubenswrapper[4805]: I0217 00:25:19.995794 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" event={"ID":"9f1342ea-63a1-446c-9b69-c3c0f5c4adc0","Type":"ContainerStarted","Data":"c5d4c1bebc61ad03532a0d10e59fb69efda8079429603a7d098a03afd050fb75"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.005679 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l2g7w" event={"ID":"fc20b905-c7f8-491a-8311-f7a7107d05b1","Type":"ContainerStarted","Data":"e362f72962fc72fe2ad3df95b4279017b8b3947ba874452cfe0170ff10d39ecd"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.032535 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" event={"ID":"19c7b0a4-a389-48ec-90d2-766e8891a87b","Type":"ContainerStarted","Data":"8c1d3f845742c4169f5be681f6e2062f1b09674035e59b761ee9020b2b513869"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.052585 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nl2qv" podStartSLOduration=125.05256759 podStartE2EDuration="2m5.05256759s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.051684874 +0000 UTC m=+146.067494272" watchObservedRunningTime="2026-02-17 00:25:20.05256759 +0000 UTC m=+146.068376998" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.053032 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lxb2h" podStartSLOduration=125.053025113 podStartE2EDuration="2m5.053025113s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.005767294 +0000 UTC m=+146.021576692" watchObservedRunningTime="2026-02-17 00:25:20.053025113 +0000 UTC m=+146.068834511" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.057285 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" event={"ID":"53331bd6-ce61-4cf5-a403-34b55ba2fed0","Type":"ContainerStarted","Data":"695e96c0f4ff8d9d13317f0aa71e81383bffbc3ae1293e573b060e7c03ceac9e"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.066183 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" 
event={"ID":"325ff293-1021-49e6-9f52-070c38d61359","Type":"ContainerStarted","Data":"bc0fe2c305edf9d0236bd988521005f15b93214b7e986ddee12309dc5efea607"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.067990 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" event={"ID":"c723bdca-c9ea-41de-b364-5d0ea1915909","Type":"ContainerStarted","Data":"0a672f5779dee2590510228a18200084d00f18fb3f12c2b7fdf45fd9a64a1cf5"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.069277 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" event={"ID":"4208e92a-1970-441e-a265-f7459d384c6f","Type":"ContainerStarted","Data":"beed81d7ab906d5fa324cf0365e577715c440f709815693adf560b2f5efad59a"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.070616 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" event={"ID":"e791d926-f75f-4056-b7ba-18d3c6474386","Type":"ContainerStarted","Data":"6419cea214ecf960cdd5d1b86e7a61e5860016140396ad52343e909eeea39e1f"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.079923 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.080478 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.580458496 +0000 UTC m=+146.596267894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.082183 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.082744 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" podStartSLOduration=125.082734404 podStartE2EDuration="2m5.082734404s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.081526468 +0000 UTC m=+146.097335866" watchObservedRunningTime="2026-02-17 00:25:20.082734404 +0000 UTC m=+146.098543802" Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.083173 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.583159616 +0000 UTC m=+146.598969024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.113791 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" event={"ID":"60bcdb5c-be8b-4095-b909-0ea48bb3ff18","Type":"ContainerStarted","Data":"fcf2c14ecd7891498d56347d65b4223630c465b7b7217117da19fdd2771bbabb"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.114158 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.115004 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-s56fj" podStartSLOduration=125.114989649 podStartE2EDuration="2m5.114989649s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.113628589 +0000 UTC m=+146.129437987" watchObservedRunningTime="2026-02-17 00:25:20.114989649 +0000 UTC m=+146.130799047" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.119465 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" event={"ID":"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d","Type":"ContainerStarted","Data":"504473a6a71cae0c12ddbf1dd54c1e807cbc5de31e2ca23e1a42c8fcf16639ea"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.139420 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" event={"ID":"3b8dca48-97f6-4af6-a4bf-38d2a5571501","Type":"ContainerStarted","Data":"15dc68293fada5928d2270e644f22c91ad89ede9717dfd70375da88793e4cd79"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.152604 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" event={"ID":"eb07c0a4-9a2c-4bc0-895f-a9a57fcf730f","Type":"ContainerStarted","Data":"94f7483bbf7ced902b9df5398ce427875b7a5720d75a927e89a2eb802b3c4b13"} Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.152869 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-tnfnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.152926 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tnfnz" podUID="a046e6a8-bd3a-4064-8be5-38fed147bdcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.156183 4805 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kr7f6 container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.156209 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" podUID="a6bf4e6f-13c5-4276-8124-fdac5ce68cd6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.156255 4805 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hj2wh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.156268 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" podUID="8a4f45dd-052e-4cc4-b491-ec02b32ea1fa" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.183145 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.184410 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.684394745 +0000 UTC m=+146.700204143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.185088 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.186500 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.190450 4805 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-fdxjw container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.190491 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" podUID="0f8fdd33-cfa1-4e6b-b1a9-fb9a212314ff" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.223879 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" podStartSLOduration=125.223844453 podStartE2EDuration="2m5.223844453s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.218492465 +0000 UTC m=+146.234301863" watchObservedRunningTime="2026-02-17 00:25:20.223844453 +0000 UTC m=+146.239653851" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.225014 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" podStartSLOduration=125.225008358 podStartE2EDuration="2m5.225008358s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.17544422 +0000 UTC m=+146.191253608" watchObservedRunningTime="2026-02-17 00:25:20.225008358 +0000 UTC m=+146.240817756" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.269420 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bs5g6" podStartSLOduration=125.269403313 podStartE2EDuration="2m5.269403313s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.268698322 +0000 UTC m=+146.284507720" watchObservedRunningTime="2026-02-17 00:25:20.269403313 +0000 UTC m=+146.285212711" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.285824 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.296723 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.796704962 +0000 UTC m=+146.812514360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.366897 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.371787 4805 patch_prober.go:28] interesting pod/router-default-5444994796-nl2qv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:25:20 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 17 00:25:20 crc kubenswrapper[4805]: [+]process-running ok Feb 17 00:25:20 crc kubenswrapper[4805]: healthz check failed Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.371846 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl2qv" podUID="9f87cfb8-eb1e-4bbb-82eb-255544ecdef1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.372563 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-49hsz" podStartSLOduration=125.372552358 podStartE2EDuration="2m5.372552358s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.342632952 +0000 UTC m=+146.358442350" watchObservedRunningTime="2026-02-17 00:25:20.372552358 +0000 UTC m=+146.388361756" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.373887 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-r5qzl" podStartSLOduration=125.373883448 podStartE2EDuration="2m5.373883448s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.371236309 +0000 UTC m=+146.387045707" watchObservedRunningTime="2026-02-17 00:25:20.373883448 +0000 UTC m=+146.389692846" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.388524 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.388910 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.888894632 +0000 UTC m=+146.904704030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.399755 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vb9ng" podStartSLOduration=125.399734523 podStartE2EDuration="2m5.399734523s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.398785995 +0000 UTC m=+146.414595403" watchObservedRunningTime="2026-02-17 00:25:20.399734523 +0000 UTC m=+146.415543921" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.430982 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dd6sw" podStartSLOduration=125.430954648 podStartE2EDuration="2m5.430954648s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.424356383 +0000 UTC m=+146.440165801" watchObservedRunningTime="2026-02-17 00:25:20.430954648 +0000 UTC m=+146.446764046" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.447268 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" podStartSLOduration=125.447245201 podStartE2EDuration="2m5.447245201s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.444102738 +0000 UTC m=+146.459912136" watchObservedRunningTime="2026-02-17 00:25:20.447245201 +0000 UTC m=+146.463054609" Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.489830 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.490391 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-17 00:25:20.990371888 +0000 UTC m=+147.006181276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.591510 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.591788 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.091745821 +0000 UTC m=+147.107555219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.592063 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.592422 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.09240636 +0000 UTC m=+147.108215758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.693620 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.693903 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.193850965 +0000 UTC m=+147.209660363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.694044 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.694473 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.194455033 +0000 UTC m=+147.210264641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.795714 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.796131 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.296116814 +0000 UTC m=+147.311926202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:20 crc kubenswrapper[4805]: I0217 00:25:20.897537 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:20 crc kubenswrapper[4805]: E0217 00:25:20.897946 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.39790828 +0000 UTC m=+147.413717678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.004200 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.004904 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.504883198 +0000 UTC m=+147.520692606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.105918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.106372 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.606356854 +0000 UTC m=+147.622166252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.167488 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" event={"ID":"89d182b3-73de-4706-9081-580ff1012a8f","Type":"ContainerStarted","Data":"d3fd968ca6c821d362fd07a4bf5ab26f5cf88fa66342912e451afcb6a5382ad2"} Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.171702 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" event={"ID":"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d","Type":"ContainerStarted","Data":"02092c50bf1f66073fac0443b0f5f1d9bef8c5ffa894a4238d6ab8e9a4f57de1"} Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.176507 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l2g7w" event={"ID":"fc20b905-c7f8-491a-8311-f7a7107d05b1","Type":"ContainerStarted","Data":"fd2882760c55863c7743f2ed63f1711d3562f95363cfcbe6d82367a2730ca2b8"} Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.177180 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.183893 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rlklw" event={"ID":"c723bdca-c9ea-41de-b364-5d0ea1915909","Type":"ContainerStarted","Data":"c673f82224598c58712999e119cc063148ee13f6fbd44d3b86f169b311b090a4"} Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.187791 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" event={"ID":"60bcdb5c-be8b-4095-b909-0ea48bb3ff18","Type":"ContainerStarted","Data":"fec08314678779097b23b0f3783c19b77e82a1ec9e3b686ba2c8071d5e9a335e"} Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.189343 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xp7wl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.189466 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" podUID="0cf51326-66d4-4091-be72-bade050afd5d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.189746 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9lrgh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.189814 4805 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" podUID="b4b82891-39be-4580-8ec1-80e78114ca96" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.206751 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.206905 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.706884512 +0000 UTC m=+147.722693910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.207314 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.207602 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.707592393 +0000 UTC m=+147.723401791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.210564 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hj2wh" Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.308304 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.309731 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.809702747 +0000 UTC m=+147.825512145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.354780 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-c9zfj" podStartSLOduration=126.354764252 podStartE2EDuration="2m6.354764252s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:20.482144944 +0000 UTC m=+146.497954352" watchObservedRunningTime="2026-02-17 00:25:21.354764252 +0000 UTC m=+147.370573650" Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.369913 4805 patch_prober.go:28] interesting pod/router-default-5444994796-nl2qv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:25:21 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 17 00:25:21 crc kubenswrapper[4805]: [+]process-running ok Feb 17 00:25:21 crc kubenswrapper[4805]: healthz check failed Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.369968 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl2qv" podUID="9f87cfb8-eb1e-4bbb-82eb-255544ecdef1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.411045 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.411502 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:21.911484362 +0000 UTC m=+147.927293760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.420450 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" podStartSLOduration=126.420427137 podStartE2EDuration="2m6.420427137s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:21.361537033 +0000 UTC m=+147.377346431" watchObservedRunningTime="2026-02-17 00:25:21.420427137 +0000 UTC m=+147.436236535" Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.511941 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.512541 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.012524215 +0000 UTC m=+148.028333613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.613678 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.613975 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.11396355 +0000 UTC m=+148.129772948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.721027 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.721192 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.221169415 +0000 UTC m=+148.236978813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.721276 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.721689 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.22168133 +0000 UTC m=+148.237490728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.822276 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.822917 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.322902559 +0000 UTC m=+148.338711957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:21 crc kubenswrapper[4805]: I0217 00:25:21.924594 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:21 crc kubenswrapper[4805]: E0217 00:25:21.925019 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.424998872 +0000 UTC m=+148.440808270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.025814 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:22 crc kubenswrapper[4805]: E0217 00:25:22.026080 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.526051945 +0000 UTC m=+148.541861343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.026436 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:22 crc kubenswrapper[4805]: E0217 00:25:22.026766 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.526753536 +0000 UTC m=+148.542562934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.127710 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:22 crc kubenswrapper[4805]: E0217 00:25:22.127890 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.627865931 +0000 UTC m=+148.643675329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.127972 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:22 crc kubenswrapper[4805]: E0217 00:25:22.128419 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.628412177 +0000 UTC m=+148.644221565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.201884 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" event={"ID":"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d","Type":"ContainerStarted","Data":"506b9ee23ca126e6f7f7b85107767f7bec0c2ece1b1cc96e874698a87876d125"} Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.201929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" event={"ID":"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d","Type":"ContainerStarted","Data":"5c4ba47d8a807deb2150b1b038d19db545a6d49d10fab166c12ca9330b50003a"} Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.213742 4805 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.228985 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:22 crc kubenswrapper[4805]: E0217 00:25:22.229186 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.729157161 +0000 UTC m=+148.744966559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.229294 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:22 crc kubenswrapper[4805]: E0217 00:25:22.229597 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:25:22.729580804 +0000 UTC m=+148.745390202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-s576k" (UID: "c367e959-10fb-43d9-baf3-31123c06738b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.250686 4805 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T00:25:22.213766215Z","Handler":null,"Name":""} Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.262316 4805 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.262402 4805 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.330873 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.368126 4805 patch_prober.go:28] interesting pod/router-default-5444994796-nl2qv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:25:22 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 17 00:25:22 crc kubenswrapper[4805]: [+]process-running ok Feb 17 00:25:22 crc kubenswrapper[4805]: healthz check failed Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.368191 4805 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-nl2qv" podUID="9f87cfb8-eb1e-4bbb-82eb-255544ecdef1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.400718 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.432644 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.434972 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-l2g7w" podStartSLOduration=9.434955327 podStartE2EDuration="9.434955327s" podCreationTimestamp="2026-02-17 00:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:21.485484414 +0000 UTC m=+147.501293802" watchObservedRunningTime="2026-02-17 00:25:22.434955327 +0000 UTC m=+148.450764725" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.436443 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r825t"] Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.437381 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r825t" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.443948 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.450858 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r825t"] Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.517939 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.518001 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.533753 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvlhq\" (UniqueName: \"kubernetes.io/projected/588d69d5-2637-42bf-a73a-d0f88ab29b83-kube-api-access-hvlhq\") pod \"community-operators-r825t\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " pod="openshift-marketplace/community-operators-r825t" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.533818 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-utilities\") pod \"community-operators-r825t\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " pod="openshift-marketplace/community-operators-r825t" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.533863 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-catalog-content\") pod \"community-operators-r825t\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " pod="openshift-marketplace/community-operators-r825t" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.544270 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-s576k\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.558004 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v6mwk" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.624550 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jg6vt"] Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.625618 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.630785 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.634727 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-catalog-content\") pod \"community-operators-r825t\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " pod="openshift-marketplace/community-operators-r825t" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.635239 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-catalog-content\") pod \"community-operators-r825t\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " pod="openshift-marketplace/community-operators-r825t" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.636205 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvlhq\" (UniqueName: \"kubernetes.io/projected/588d69d5-2637-42bf-a73a-d0f88ab29b83-kube-api-access-hvlhq\") pod \"community-operators-r825t\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " pod="openshift-marketplace/community-operators-r825t" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.636371 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-utilities\") pod \"community-operators-r825t\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " pod="openshift-marketplace/community-operators-r825t" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.636691 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-utilities\") pod \"community-operators-r825t\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " pod="openshift-marketplace/community-operators-r825t" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.656979 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.673834 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvlhq\" (UniqueName: \"kubernetes.io/projected/588d69d5-2637-42bf-a73a-d0f88ab29b83-kube-api-access-hvlhq\") pod \"community-operators-r825t\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " pod="openshift-marketplace/community-operators-r825t" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.687573 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jg6vt"] Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.737660 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq7lx\" (UniqueName: \"kubernetes.io/projected/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-kube-api-access-kq7lx\") pod \"certified-operators-jg6vt\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.737790 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-utilities\") pod \"certified-operators-jg6vt\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.737808 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-catalog-content\") pod \"certified-operators-jg6vt\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.792275 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.798555 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.799303 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.800930 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.801159 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.810133 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r825t" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.828500 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.831593 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gstnk"] Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.832582 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.843951 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-utilities\") pod \"certified-operators-jg6vt\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.844118 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-catalog-content\") pod \"certified-operators-jg6vt\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.844198 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq7lx\" (UniqueName: \"kubernetes.io/projected/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-kube-api-access-kq7lx\") pod \"certified-operators-jg6vt\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.844954 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-utilities\") pod \"certified-operators-jg6vt\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.846058 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-catalog-content\") pod \"certified-operators-jg6vt\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.852668 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gstnk"] Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.917701 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq7lx\" (UniqueName: \"kubernetes.io/projected/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-kube-api-access-kq7lx\") pod \"certified-operators-jg6vt\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.938185 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xp7wl" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.938524 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.950130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-utilities\") pod \"community-operators-gstnk\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.950190 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/309beea5-8d21-4125-b1db-5e13ff5605bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"309beea5-8d21-4125-b1db-5e13ff5605bb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.950208 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/309beea5-8d21-4125-b1db-5e13ff5605bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"309beea5-8d21-4125-b1db-5e13ff5605bb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.950237 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-catalog-content\") pod \"community-operators-gstnk\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:25:22 crc kubenswrapper[4805]: I0217 00:25:22.950269 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8twhg\" (UniqueName: \"kubernetes.io/projected/a77c3401-47c1-41a8-806a-0bdb1ad48302-kube-api-access-8twhg\") pod \"community-operators-gstnk\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.041606 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bv74b"] Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.042559 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.051164 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bv74b"] Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.055377 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-utilities\") pod \"community-operators-gstnk\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.055433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/309beea5-8d21-4125-b1db-5e13ff5605bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"309beea5-8d21-4125-b1db-5e13ff5605bb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.055453 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/309beea5-8d21-4125-b1db-5e13ff5605bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"309beea5-8d21-4125-b1db-5e13ff5605bb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.058188 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/309beea5-8d21-4125-b1db-5e13ff5605bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"309beea5-8d21-4125-b1db-5e13ff5605bb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.058592 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-utilities\") pod \"community-operators-gstnk\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.069454 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-catalog-content\") pod \"community-operators-gstnk\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.069582 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8twhg\" (UniqueName: \"kubernetes.io/projected/a77c3401-47c1-41a8-806a-0bdb1ad48302-kube-api-access-8twhg\") pod \"community-operators-gstnk\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.070295 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-catalog-content\") pod \"community-operators-gstnk\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.077792 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.077848 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.137642 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8twhg\" (UniqueName: \"kubernetes.io/projected/a77c3401-47c1-41a8-806a-0bdb1ad48302-kube-api-access-8twhg\") pod \"community-operators-gstnk\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.138076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/309beea5-8d21-4125-b1db-5e13ff5605bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"309beea5-8d21-4125-b1db-5e13ff5605bb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.170686 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5hwp\" (UniqueName: \"kubernetes.io/projected/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-kube-api-access-m5hwp\") pod \"certified-operators-bv74b\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.170810 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.171474 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.174196 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.174255 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-catalog-content\") pod \"certified-operators-bv74b\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.174318 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-utilities\") pod \"certified-operators-bv74b\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.174373 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.174405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.182272 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.185862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.186262 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.225362 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" event={"ID":"7fa6ce4d-9ef5-48cb-a1ee-31e5bf8b676d","Type":"ContainerStarted","Data":"37f0609df05e45ae8a208d8475cb805c83bfdde4033ff5331ff5ab1e0d99d809"} Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.241525 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.254262 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s576k"] Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.262849 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xs2qc" podStartSLOduration=10.262831529 podStartE2EDuration="10.262831529s" podCreationTimestamp="2026-02-17 00:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:23.261562222 +0000 UTC m=+149.277371620" watchObservedRunningTime="2026-02-17 00:25:23.262831529 +0000 UTC m=+149.278640927" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.276101 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5hwp\" (UniqueName: \"kubernetes.io/projected/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-kube-api-access-m5hwp\") pod \"certified-operators-bv74b\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.276168 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-catalog-content\") pod \"certified-operators-bv74b\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.276199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-utilities\") pod \"certified-operators-bv74b\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.276654 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-utilities\") pod \"certified-operators-bv74b\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.276754 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-catalog-content\") pod \"certified-operators-bv74b\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.311186 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5hwp\" (UniqueName: \"kubernetes.io/projected/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-kube-api-access-m5hwp\") pod \"certified-operators-bv74b\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.314740 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.334629 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.341402 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.378051 4805 patch_prober.go:28] interesting pod/router-default-5444994796-nl2qv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:25:23 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 17 00:25:23 crc kubenswrapper[4805]: [+]process-running ok Feb 17 00:25:23 crc kubenswrapper[4805]: healthz check failed Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.378114 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl2qv" podUID="9f87cfb8-eb1e-4bbb-82eb-255544ecdef1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.378140 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.404409 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r825t"] Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.413521 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.637144 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gstnk"] Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.684956 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jg6vt"] Feb 17 00:25:23 crc kubenswrapper[4805]: I0217 00:25:23.919052 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bv74b"] Feb 17 00:25:24 crc kubenswrapper[4805]: W0217 00:25:24.010812 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee6fe5f1_e028_4ff7_9edb_f547d9f7e741.slice/crio-2f32bd46ff9f1eb907008350e0eee123d3a570ae3d49412df0eb033ddabc9658 WatchSource:0}: Error finding container 2f32bd46ff9f1eb907008350e0eee123d3a570ae3d49412df0eb033ddabc9658: Status 404 returned error can't find the container with id 2f32bd46ff9f1eb907008350e0eee123d3a570ae3d49412df0eb033ddabc9658 Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.061002 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 00:25:24 crc kubenswrapper[4805]: W0217 00:25:24.072704 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod309beea5_8d21_4125_b1db_5e13ff5605bb.slice/crio-595c5ab869c6a2e1d8930e434ad33f50f77fdebaec975791e8222446af300ff9 WatchSource:0}: Error finding container 595c5ab869c6a2e1d8930e434ad33f50f77fdebaec975791e8222446af300ff9: Status 404 returned error can't find the container with id 595c5ab869c6a2e1d8930e434ad33f50f77fdebaec975791e8222446af300ff9 Feb 17 00:25:24 crc kubenswrapper[4805]: W0217 00:25:24.086449 4805 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-cbadb462ba8e4dfa6f801816c356a972682d03a353fa601be8b2b2a4221403e2 WatchSource:0}: Error finding container cbadb462ba8e4dfa6f801816c356a972682d03a353fa601be8b2b2a4221403e2: Status 404 returned error can't find the container with id cbadb462ba8e4dfa6f801816c356a972682d03a353fa601be8b2b2a4221403e2 Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.234043 4805 generic.go:334] "Generic (PLEG): container finished" podID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" containerID="72c3750c185f070e25272b1f866d596ef65293cf923a2c00437c824c640dca55" exitCode=0 Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.234397 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jg6vt" event={"ID":"b09f5ed1-a921-4af2-abfe-e9066d9aa05e","Type":"ContainerDied","Data":"72c3750c185f070e25272b1f866d596ef65293cf923a2c00437c824c640dca55"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.234424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jg6vt" event={"ID":"b09f5ed1-a921-4af2-abfe-e9066d9aa05e","Type":"ContainerStarted","Data":"0a186dbaf0e3c415867b1eb078026847cb8d1dd66c75920663ccdbb945c7759f"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.250478 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.261064 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" event={"ID":"c367e959-10fb-43d9-baf3-31123c06738b","Type":"ContainerStarted","Data":"73f8a906e01d4c190fc76468d8aa9cbcaf34b352f9715bbe5f7dc6c68a157ea1"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.261107 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" event={"ID":"c367e959-10fb-43d9-baf3-31123c06738b","Type":"ContainerStarted","Data":"4a4c90b0fa55868d4369febd6d6527a62ef9a3961c11acffdb26edfb2d206550"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.261815 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.265091 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bfbd238292d0c464996e17a50be955097dc6687be58f8280e5eb7acc967c0f88"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.268025 4805 generic.go:334] "Generic (PLEG): container finished" podID="588d69d5-2637-42bf-a73a-d0f88ab29b83" containerID="bad7e7a7eff806809785be7e6b9634d7e6be03ce6b4836ebc0f9bea339cb6b94" exitCode=0 Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.268084 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r825t" event={"ID":"588d69d5-2637-42bf-a73a-d0f88ab29b83","Type":"ContainerDied","Data":"bad7e7a7eff806809785be7e6b9634d7e6be03ce6b4836ebc0f9bea339cb6b94"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.268103 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r825t" 
event={"ID":"588d69d5-2637-42bf-a73a-d0f88ab29b83","Type":"ContainerStarted","Data":"35cc5ec1dcc48f79e0dff05053d93f9a8d66a1cef7000d9ab472f7a9405b226b"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.269418 4805 generic.go:334] "Generic (PLEG): container finished" podID="a77c3401-47c1-41a8-806a-0bdb1ad48302" containerID="0bf0a268aed44a681fe6ab28919de5c1bb4b3db1368053b6666b1a2e9f91fdad" exitCode=0 Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.269502 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gstnk" event={"ID":"a77c3401-47c1-41a8-806a-0bdb1ad48302","Type":"ContainerDied","Data":"0bf0a268aed44a681fe6ab28919de5c1bb4b3db1368053b6666b1a2e9f91fdad"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.269588 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gstnk" event={"ID":"a77c3401-47c1-41a8-806a-0bdb1ad48302","Type":"ContainerStarted","Data":"f9d45cb0704f67453ae2f381dcd630de590077c7ca6eae7d51861cc8c95ce4bf"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.272072 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e76ed4c218a5eb95e70642fdf1e6dea86e7fffeef4dc92b965cc4597b7e67416"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.279378 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cbadb462ba8e4dfa6f801816c356a972682d03a353fa601be8b2b2a4221403e2"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.290474 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv74b" event={"ID":"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741","Type":"ContainerStarted","Data":"2f32bd46ff9f1eb907008350e0eee123d3a570ae3d49412df0eb033ddabc9658"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.293418 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"309beea5-8d21-4125-b1db-5e13ff5605bb","Type":"ContainerStarted","Data":"595c5ab869c6a2e1d8930e434ad33f50f77fdebaec975791e8222446af300ff9"} Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.312350 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" podStartSLOduration=129.312318116 podStartE2EDuration="2m9.312318116s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:24.285556593 +0000 UTC m=+150.301365991" watchObservedRunningTime="2026-02-17 00:25:24.312318116 +0000 UTC m=+150.328127514" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.369351 4805 patch_prober.go:28] interesting pod/router-default-5444994796-nl2qv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:25:24 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 17 00:25:24 crc kubenswrapper[4805]: [+]process-running ok Feb 17 00:25:24 crc kubenswrapper[4805]: healthz check failed Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 
00:25:24.369408 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl2qv" podUID="9f87cfb8-eb1e-4bbb-82eb-255544ecdef1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.630647 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fcmd9"] Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.632050 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.634562 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.642690 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcmd9"] Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.698192 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-catalog-content\") pod \"redhat-marketplace-fcmd9\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.698268 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-utilities\") pod \"redhat-marketplace-fcmd9\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.698319 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4952v\" (UniqueName: \"kubernetes.io/projected/3f799a43-6325-4943-8c49-58ad9822eb77-kube-api-access-4952v\") pod \"redhat-marketplace-fcmd9\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.799579 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-catalog-content\") pod \"redhat-marketplace-fcmd9\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.799693 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-utilities\") pod \"redhat-marketplace-fcmd9\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.799760 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4952v\" (UniqueName: \"kubernetes.io/projected/3f799a43-6325-4943-8c49-58ad9822eb77-kube-api-access-4952v\") pod \"redhat-marketplace-fcmd9\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.800132 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-catalog-content\") pod \"redhat-marketplace-fcmd9\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.800420 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-utilities\") pod \"redhat-marketplace-fcmd9\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.824726 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4952v\" (UniqueName: \"kubernetes.io/projected/3f799a43-6325-4943-8c49-58ad9822eb77-kube-api-access-4952v\") pod \"redhat-marketplace-fcmd9\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:25:24 crc kubenswrapper[4805]: I0217 00:25:24.953204 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.027122 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w5xg9"] Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.028383 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.042765 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5xg9"] Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.104266 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-catalog-content\") pod \"redhat-marketplace-w5xg9\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.104717 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-utilities\") pod \"redhat-marketplace-w5xg9\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.104766 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7k64\" (UniqueName: \"kubernetes.io/projected/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-kube-api-access-p7k64\") pod \"redhat-marketplace-w5xg9\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.171784 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.172795 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.194194 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.195583 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.201350 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fdxjw" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.205543 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-catalog-content\") pod \"redhat-marketplace-w5xg9\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.205591 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-utilities\") pod \"redhat-marketplace-w5xg9\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.205654 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7k64\" (UniqueName: \"kubernetes.io/projected/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-kube-api-access-p7k64\") pod \"redhat-marketplace-w5xg9\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.206448 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-catalog-content\") pod \"redhat-marketplace-w5xg9\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.206736 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-utilities\") pod \"redhat-marketplace-w5xg9\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.227922 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7k64\" (UniqueName: \"kubernetes.io/projected/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-kube-api-access-p7k64\") pod \"redhat-marketplace-w5xg9\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.233767 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcmd9"] Feb 17 00:25:25 crc kubenswrapper[4805]: W0217 00:25:25.237094 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f799a43_6325_4943_8c49_58ad9822eb77.slice/crio-5dc113bbe851b603aed9ca66e739cd07837f01ec56ec25a17e921addba56e243 WatchSource:0}: Error finding container 5dc113bbe851b603aed9ca66e739cd07837f01ec56ec25a17e921addba56e243: Status 404 returned error can't find the container with id 5dc113bbe851b603aed9ca66e739cd07837f01ec56ec25a17e921addba56e243 Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 
00:25:25.352187 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.352600 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.355195 4805 generic.go:334] "Generic (PLEG): container finished" podID="4208e92a-1970-441e-a265-f7459d384c6f" containerID="beed81d7ab906d5fa324cf0365e577715c440f709815693adf560b2f5efad59a" exitCode=0 Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.355300 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" event={"ID":"4208e92a-1970-441e-a265-f7459d384c6f","Type":"ContainerDied","Data":"beed81d7ab906d5fa324cf0365e577715c440f709815693adf560b2f5efad59a"} Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.367781 4805 patch_prober.go:28] interesting pod/console-f9d7485db-t9l4h container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.367830 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-t9l4h" podUID="24781b06-2cc6-49d0-a506-b992048e1c84" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.373623 4805 patch_prober.go:28] interesting pod/router-default-5444994796-nl2qv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:25:25 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 17 00:25:25 crc kubenswrapper[4805]: [+]process-running ok Feb 17 00:25:25 crc kubenswrapper[4805]: healthz check failed Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.373676 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl2qv" podUID="9f87cfb8-eb1e-4bbb-82eb-255544ecdef1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.373798 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f836c26d265eb577a4123021c58ab78902137f0864c23a5a2ce23e2b8fb995c7"} Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.386461 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.390868 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9e221b8564b0f2fa549da5c91c9cbaa9355f7f4eadbafd5c812925384472c860"} Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.396479 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcmd9" event={"ID":"3f799a43-6325-4943-8c49-58ad9822eb77","Type":"ContainerStarted","Data":"5dc113bbe851b603aed9ca66e739cd07837f01ec56ec25a17e921addba56e243"} Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.398024 4805 generic.go:334] "Generic (PLEG): container finished" podID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" containerID="3e2fb61c1503979852241e942510ec93491f4b45cc6a368ab87340617259d326" exitCode=0 Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.398077 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv74b" event={"ID":"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741","Type":"ContainerDied","Data":"3e2fb61c1503979852241e942510ec93491f4b45cc6a368ab87340617259d326"} Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.400586 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"309beea5-8d21-4125-b1db-5e13ff5605bb","Type":"ContainerStarted","Data":"c220fe4db1ce1a171419d6047a6b19782a280fb6fbad7576a65d3de270222bbd"} Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.404367 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a1dd750de188faacfb458a8c1d13ff4eedc3f462437583a60f5d02ad3f746d4a"} Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.405078 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.422494 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8dtg4" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.629023 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7hfzb"] Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.630439 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.642537 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.650391 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7hfzb"] Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.660460 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-tnfnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.660513 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tnfnz" podUID="a046e6a8-bd3a-4064-8be5-38fed147bdcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.660551 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-tnfnz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.660606 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tnfnz" podUID="a046e6a8-bd3a-4064-8be5-38fed147bdcf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.718071 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v966\" (UniqueName: \"kubernetes.io/projected/ceb73aa9-1038-44da-adce-a56dddfbdaa0-kube-api-access-5v966\") pod \"redhat-operators-7hfzb\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.718145 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-utilities\") pod \"redhat-operators-7hfzb\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.718185 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-catalog-content\") pod \"redhat-operators-7hfzb\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.815496 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kr7f6" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.819210 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v966\" (UniqueName: \"kubernetes.io/projected/ceb73aa9-1038-44da-adce-a56dddfbdaa0-kube-api-access-5v966\") pod 
\"redhat-operators-7hfzb\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.819278 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-utilities\") pod \"redhat-operators-7hfzb\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.819318 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-catalog-content\") pod \"redhat-operators-7hfzb\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.819795 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-catalog-content\") pod \"redhat-operators-7hfzb\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.819835 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-utilities\") pod \"redhat-operators-7hfzb\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.857942 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v966\" (UniqueName: \"kubernetes.io/projected/ceb73aa9-1038-44da-adce-a56dddfbdaa0-kube-api-access-5v966\") pod \"redhat-operators-7hfzb\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.932840 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5xg9"] Feb 17 00:25:25 crc kubenswrapper[4805]: I0217 00:25:25.965816 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.041842 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mz92r"] Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.045217 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.054771 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mz92r"] Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.129629 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-catalog-content\") pod \"redhat-operators-mz92r\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.129714 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsw2w\" (UniqueName: \"kubernetes.io/projected/2342db7f-2c3a-431e-a891-e844a7284298-kube-api-access-wsw2w\") pod \"redhat-operators-mz92r\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.129772 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-utilities\") pod \"redhat-operators-mz92r\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.232949 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-catalog-content\") pod \"redhat-operators-mz92r\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.233021 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsw2w\" (UniqueName: \"kubernetes.io/projected/2342db7f-2c3a-431e-a891-e844a7284298-kube-api-access-wsw2w\") pod \"redhat-operators-mz92r\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.233077 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-utilities\") pod \"redhat-operators-mz92r\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.233907 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-utilities\") pod \"redhat-operators-mz92r\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.234116 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-catalog-content\") pod \"redhat-operators-mz92r\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.281391 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wsw2w\" (UniqueName: \"kubernetes.io/projected/2342db7f-2c3a-431e-a891-e844a7284298-kube-api-access-wsw2w\") pod \"redhat-operators-mz92r\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.371423 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.384583 4805 patch_prober.go:28] interesting pod/router-default-5444994796-nl2qv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:25:26 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 17 00:25:26 crc kubenswrapper[4805]: [+]process-running ok Feb 17 00:25:26 crc kubenswrapper[4805]: healthz check failed Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.384633 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl2qv" podUID="9f87cfb8-eb1e-4bbb-82eb-255544ecdef1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.409596 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7hfzb"] Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.432141 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.461982 4805 generic.go:334] "Generic (PLEG): container finished" podID="3f799a43-6325-4943-8c49-58ad9822eb77" containerID="3ffed4a9c4d0136ebc40f521a5a0e74d22089ae11fbefa9999980a96c07fd6fb" exitCode=0 Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.462063 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcmd9" event={"ID":"3f799a43-6325-4943-8c49-58ad9822eb77","Type":"ContainerDied","Data":"3ffed4a9c4d0136ebc40f521a5a0e74d22089ae11fbefa9999980a96c07fd6fb"} Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.471223 4805 generic.go:334] "Generic (PLEG): container finished" podID="309beea5-8d21-4125-b1db-5e13ff5605bb" containerID="c220fe4db1ce1a171419d6047a6b19782a280fb6fbad7576a65d3de270222bbd" exitCode=0 Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.471287 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"309beea5-8d21-4125-b1db-5e13ff5605bb","Type":"ContainerDied","Data":"c220fe4db1ce1a171419d6047a6b19782a280fb6fbad7576a65d3de270222bbd"} Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.471892 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.502744 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5xg9" event={"ID":"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77","Type":"ContainerStarted","Data":"fa3b8dd7ee746544b8ddb24535751b6222d777665a0fed7db62a887380526fa6"} Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.502790 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5xg9" 
event={"ID":"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77","Type":"ContainerStarted","Data":"8e47ad67346a861206c3ca6c4a049d07e514209468cd6a3464004d81a4fbda5e"} Feb 17 00:25:26 crc kubenswrapper[4805]: E0217 00:25:26.509864 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4d2c36c_b305_4234_b3aa_31b0c3cd7f77.slice/crio-conmon-fa3b8dd7ee746544b8ddb24535751b6222d777665a0fed7db62a887380526fa6.scope\": RecentStats: unable to find data in memory cache]" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.856186 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.927460 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.954801 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/309beea5-8d21-4125-b1db-5e13ff5605bb-kubelet-dir\") pod \"309beea5-8d21-4125-b1db-5e13ff5605bb\" (UID: \"309beea5-8d21-4125-b1db-5e13ff5605bb\") " Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.955107 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4208e92a-1970-441e-a265-f7459d384c6f-secret-volume\") pod \"4208e92a-1970-441e-a265-f7459d384c6f\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.955157 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4208e92a-1970-441e-a265-f7459d384c6f-config-volume\") pod \"4208e92a-1970-441e-a265-f7459d384c6f\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.955186 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/309beea5-8d21-4125-b1db-5e13ff5605bb-kube-api-access\") pod \"309beea5-8d21-4125-b1db-5e13ff5605bb\" (UID: \"309beea5-8d21-4125-b1db-5e13ff5605bb\") " Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.955224 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzhsg\" (UniqueName: \"kubernetes.io/projected/4208e92a-1970-441e-a265-f7459d384c6f-kube-api-access-tzhsg\") pod \"4208e92a-1970-441e-a265-f7459d384c6f\" (UID: \"4208e92a-1970-441e-a265-f7459d384c6f\") " Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.955824 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309beea5-8d21-4125-b1db-5e13ff5605bb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "309beea5-8d21-4125-b1db-5e13ff5605bb" (UID: "309beea5-8d21-4125-b1db-5e13ff5605bb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.958507 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4208e92a-1970-441e-a265-f7459d384c6f-config-volume" (OuterVolumeSpecName: "config-volume") pod "4208e92a-1970-441e-a265-f7459d384c6f" (UID: "4208e92a-1970-441e-a265-f7459d384c6f"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.966998 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4208e92a-1970-441e-a265-f7459d384c6f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4208e92a-1970-441e-a265-f7459d384c6f" (UID: "4208e92a-1970-441e-a265-f7459d384c6f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.971582 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4208e92a-1970-441e-a265-f7459d384c6f-kube-api-access-tzhsg" (OuterVolumeSpecName: "kube-api-access-tzhsg") pod "4208e92a-1970-441e-a265-f7459d384c6f" (UID: "4208e92a-1970-441e-a265-f7459d384c6f"). InnerVolumeSpecName "kube-api-access-tzhsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:25:26 crc kubenswrapper[4805]: I0217 00:25:26.974566 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309beea5-8d21-4125-b1db-5e13ff5605bb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "309beea5-8d21-4125-b1db-5e13ff5605bb" (UID: "309beea5-8d21-4125-b1db-5e13ff5605bb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.056837 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/309beea5-8d21-4125-b1db-5e13ff5605bb-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.056872 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzhsg\" (UniqueName: \"kubernetes.io/projected/4208e92a-1970-441e-a265-f7459d384c6f-kube-api-access-tzhsg\") on node \"crc\" DevicePath \"\"" Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.056884 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/309beea5-8d21-4125-b1db-5e13ff5605bb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.056897 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4208e92a-1970-441e-a265-f7459d384c6f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.056906 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4208e92a-1970-441e-a265-f7459d384c6f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.119898 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mz92r"] Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.366830 4805 patch_prober.go:28] interesting pod/router-default-5444994796-nl2qv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:25:27 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 17 00:25:27 crc kubenswrapper[4805]: [+]process-running ok Feb 17 00:25:27 crc kubenswrapper[4805]: healthz check failed Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.367347 4805 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl2qv" podUID="9f87cfb8-eb1e-4bbb-82eb-255544ecdef1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.531040 4805 generic.go:334] "Generic (PLEG): container finished" podID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" containerID="fa3b8dd7ee746544b8ddb24535751b6222d777665a0fed7db62a887380526fa6" exitCode=0 Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.531134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5xg9" event={"ID":"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77","Type":"ContainerDied","Data":"fa3b8dd7ee746544b8ddb24535751b6222d777665a0fed7db62a887380526fa6"} Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.552599 4805 generic.go:334] "Generic (PLEG): container finished" podID="2342db7f-2c3a-431e-a891-e844a7284298" containerID="c7b946015bd939755b24fd6d1e701b065f439bbc94367878f89046eeb9bfe91d" exitCode=0 Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.552938 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz92r" event={"ID":"2342db7f-2c3a-431e-a891-e844a7284298","Type":"ContainerDied","Data":"c7b946015bd939755b24fd6d1e701b065f439bbc94367878f89046eeb9bfe91d"} Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.553007 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz92r" event={"ID":"2342db7f-2c3a-431e-a891-e844a7284298","Type":"ContainerStarted","Data":"3c19a3d3e349c6cecc717a461f6a2fe69e45d2e54e565cb34133c25bac04874b"} Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.574987 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" event={"ID":"4208e92a-1970-441e-a265-f7459d384c6f","Type":"ContainerDied","Data":"61d9a27ac91fc62c64132c69bf7228fe5d3d556044f65cacdcfa3843a3e4aec5"} Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.575028 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61d9a27ac91fc62c64132c69bf7228fe5d3d556044f65cacdcfa3843a3e4aec5" Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.575107 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv" Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.604582 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"309beea5-8d21-4125-b1db-5e13ff5605bb","Type":"ContainerDied","Data":"595c5ab869c6a2e1d8930e434ad33f50f77fdebaec975791e8222446af300ff9"} Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.604618 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="595c5ab869c6a2e1d8930e434ad33f50f77fdebaec975791e8222446af300ff9" Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.604681 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.616645 4805 generic.go:334] "Generic (PLEG): container finished" podID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerID="e39a51a759245d403742723f6e3e701275516948c62ff5b0a4b71350ea8e918e" exitCode=0 Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.617656 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hfzb" event={"ID":"ceb73aa9-1038-44da-adce-a56dddfbdaa0","Type":"ContainerDied","Data":"e39a51a759245d403742723f6e3e701275516948c62ff5b0a4b71350ea8e918e"} Feb 17 00:25:27 crc kubenswrapper[4805]: I0217 00:25:27.617682 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hfzb" event={"ID":"ceb73aa9-1038-44da-adce-a56dddfbdaa0","Type":"ContainerStarted","Data":"6cc45798fe0e7e7e0c769e7710cf24476cc70035e933cfa03231f733d3436917"} Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.365220 4805 patch_prober.go:28] interesting pod/router-default-5444994796-nl2qv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:25:28 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 17 00:25:28 crc kubenswrapper[4805]: [+]process-running ok Feb 17 00:25:28 crc kubenswrapper[4805]: healthz check failed Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.365283 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl2qv" podUID="9f87cfb8-eb1e-4bbb-82eb-255544ecdef1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.518809 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 00:25:28 crc kubenswrapper[4805]: E0217 00:25:28.519025 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4208e92a-1970-441e-a265-f7459d384c6f" containerName="collect-profiles" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.519038 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4208e92a-1970-441e-a265-f7459d384c6f" containerName="collect-profiles" Feb 17 00:25:28 crc kubenswrapper[4805]: E0217 00:25:28.519050 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="309beea5-8d21-4125-b1db-5e13ff5605bb" containerName="pruner" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.519056 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="309beea5-8d21-4125-b1db-5e13ff5605bb" containerName="pruner" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.519148 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="309beea5-8d21-4125-b1db-5e13ff5605bb" containerName="pruner" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.519159 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4208e92a-1970-441e-a265-f7459d384c6f" containerName="collect-profiles" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.519526 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.522209 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.523418 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.525951 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.591444 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3856feac-6d7b-4f41-86c8-5a49633ad81b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3856feac-6d7b-4f41-86c8-5a49633ad81b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.593221 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3856feac-6d7b-4f41-86c8-5a49633ad81b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3856feac-6d7b-4f41-86c8-5a49633ad81b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.694809 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3856feac-6d7b-4f41-86c8-5a49633ad81b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3856feac-6d7b-4f41-86c8-5a49633ad81b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.694865 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3856feac-6d7b-4f41-86c8-5a49633ad81b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3856feac-6d7b-4f41-86c8-5a49633ad81b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.694958 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3856feac-6d7b-4f41-86c8-5a49633ad81b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3856feac-6d7b-4f41-86c8-5a49633ad81b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.713548 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3856feac-6d7b-4f41-86c8-5a49633ad81b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3856feac-6d7b-4f41-86c8-5a49633ad81b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 00:25:28 crc kubenswrapper[4805]: I0217 00:25:28.844268 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 00:25:29 crc kubenswrapper[4805]: I0217 00:25:29.365371 4805 patch_prober.go:28] interesting pod/router-default-5444994796-nl2qv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:25:29 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 17 00:25:29 crc kubenswrapper[4805]: [+]process-running ok Feb 17 00:25:29 crc kubenswrapper[4805]: healthz check failed Feb 17 00:25:29 crc kubenswrapper[4805]: I0217 00:25:29.365558 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nl2qv" podUID="9f87cfb8-eb1e-4bbb-82eb-255544ecdef1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:25:29 crc kubenswrapper[4805]: I0217 00:25:29.403305 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 17 00:25:29 crc kubenswrapper[4805]: W0217 00:25:29.420626 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3856feac_6d7b_4f41_86c8_5a49633ad81b.slice/crio-9cfa07c07f109ff9b5af25a0062d526315442c22ea2bfcf858e73c95c0eee0a5 WatchSource:0}: Error finding container 9cfa07c07f109ff9b5af25a0062d526315442c22ea2bfcf858e73c95c0eee0a5: Status 404 returned error can't find the container with id 9cfa07c07f109ff9b5af25a0062d526315442c22ea2bfcf858e73c95c0eee0a5 Feb 17 00:25:29 crc kubenswrapper[4805]: I0217 00:25:29.636939 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3856feac-6d7b-4f41-86c8-5a49633ad81b","Type":"ContainerStarted","Data":"9cfa07c07f109ff9b5af25a0062d526315442c22ea2bfcf858e73c95c0eee0a5"} Feb 17 00:25:30 crc kubenswrapper[4805]: I0217 00:25:30.372800 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:30 crc kubenswrapper[4805]: I0217 00:25:30.376121 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nl2qv" Feb 17 00:25:30 crc kubenswrapper[4805]: I0217 00:25:30.665046 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3856feac-6d7b-4f41-86c8-5a49633ad81b","Type":"ContainerStarted","Data":"e8041379b138265b83b8d606e05748c7005a53595f410a64614d2bc2459287e6"} Feb 17 00:25:30 crc kubenswrapper[4805]: I0217 00:25:30.680153 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.6801382929999997 podStartE2EDuration="2.680138293s" podCreationTimestamp="2026-02-17 00:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:30.67798979 +0000 UTC m=+156.693799188" watchObservedRunningTime="2026-02-17 00:25:30.680138293 +0000 UTC m=+156.695947691" Feb 17 00:25:31 crc kubenswrapper[4805]: I0217 00:25:31.523407 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-l2g7w" Feb 17 00:25:31 crc kubenswrapper[4805]: I0217 00:25:31.676988 4805 generic.go:334] "Generic (PLEG): container finished" podID="3856feac-6d7b-4f41-86c8-5a49633ad81b" 
containerID="e8041379b138265b83b8d606e05748c7005a53595f410a64614d2bc2459287e6" exitCode=0 Feb 17 00:25:31 crc kubenswrapper[4805]: I0217 00:25:31.677031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3856feac-6d7b-4f41-86c8-5a49633ad81b","Type":"ContainerDied","Data":"e8041379b138265b83b8d606e05748c7005a53595f410a64614d2bc2459287e6"} Feb 17 00:25:35 crc kubenswrapper[4805]: I0217 00:25:35.365630 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:35 crc kubenswrapper[4805]: I0217 00:25:35.372435 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:25:35 crc kubenswrapper[4805]: I0217 00:25:35.664733 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tnfnz" Feb 17 00:25:37 crc kubenswrapper[4805]: I0217 00:25:37.776806 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:25:37 crc kubenswrapper[4805]: I0217 00:25:37.800716 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86b8a270-8cb3-4266-9fe0-3cfd027a9174-metrics-certs\") pod \"network-metrics-daemon-jnv59\" (UID: \"86b8a270-8cb3-4266-9fe0-3cfd027a9174\") " pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:25:38 crc kubenswrapper[4805]: I0217 00:25:38.023382 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jnv59" Feb 17 00:25:39 crc kubenswrapper[4805]: I0217 00:25:39.750555 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 00:25:39 crc kubenswrapper[4805]: I0217 00:25:39.750719 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3856feac-6d7b-4f41-86c8-5a49633ad81b","Type":"ContainerDied","Data":"9cfa07c07f109ff9b5af25a0062d526315442c22ea2bfcf858e73c95c0eee0a5"} Feb 17 00:25:39 crc kubenswrapper[4805]: I0217 00:25:39.751035 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cfa07c07f109ff9b5af25a0062d526315442c22ea2bfcf858e73c95c0eee0a5" Feb 17 00:25:39 crc kubenswrapper[4805]: I0217 00:25:39.907039 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3856feac-6d7b-4f41-86c8-5a49633ad81b-kube-api-access\") pod \"3856feac-6d7b-4f41-86c8-5a49633ad81b\" (UID: \"3856feac-6d7b-4f41-86c8-5a49633ad81b\") " Feb 17 00:25:39 crc kubenswrapper[4805]: I0217 00:25:39.907189 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3856feac-6d7b-4f41-86c8-5a49633ad81b-kubelet-dir\") pod \"3856feac-6d7b-4f41-86c8-5a49633ad81b\" (UID: \"3856feac-6d7b-4f41-86c8-5a49633ad81b\") " Feb 17 00:25:39 crc kubenswrapper[4805]: I0217 00:25:39.907368 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3856feac-6d7b-4f41-86c8-5a49633ad81b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3856feac-6d7b-4f41-86c8-5a49633ad81b" (UID: "3856feac-6d7b-4f41-86c8-5a49633ad81b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:25:39 crc kubenswrapper[4805]: I0217 00:25:39.908147 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3856feac-6d7b-4f41-86c8-5a49633ad81b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:25:39 crc kubenswrapper[4805]: I0217 00:25:39.912469 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3856feac-6d7b-4f41-86c8-5a49633ad81b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3856feac-6d7b-4f41-86c8-5a49633ad81b" (UID: "3856feac-6d7b-4f41-86c8-5a49633ad81b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:25:40 crc kubenswrapper[4805]: I0217 00:25:40.009870 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3856feac-6d7b-4f41-86c8-5a49633ad81b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:25:40 crc kubenswrapper[4805]: I0217 00:25:40.755984 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 00:25:42 crc kubenswrapper[4805]: I0217 00:25:42.665744 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:25:49 crc kubenswrapper[4805]: I0217 00:25:49.819960 4805 generic.go:334] "Generic (PLEG): container finished" podID="0ed7bf5a-a6c8-47a3-8e66-0401495250f3" containerID="ae2b8acec10cf8d060bb090cfc76bf537c41996b4b62f0f6d82800173b284262" exitCode=0 Feb 17 00:25:49 crc kubenswrapper[4805]: I0217 00:25:49.820077 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-8tt24" event={"ID":"0ed7bf5a-a6c8-47a3-8e66-0401495250f3","Type":"ContainerDied","Data":"ae2b8acec10cf8d060bb090cfc76bf537c41996b4b62f0f6d82800173b284262"} Feb 17 00:25:53 crc kubenswrapper[4805]: I0217 00:25:53.077410 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:25:53 crc kubenswrapper[4805]: I0217 00:25:53.077805 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:25:53 crc kubenswrapper[4805]: I0217 00:25:53.845784 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-8tt24" event={"ID":"0ed7bf5a-a6c8-47a3-8e66-0401495250f3","Type":"ContainerDied","Data":"4abc118e4bfd62204c58d81c7e9cba120175cca2291e708a9f46a7a47fe4e36d"} Feb 17 00:25:53 crc kubenswrapper[4805]: I0217 00:25:53.846006 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4abc118e4bfd62204c58d81c7e9cba120175cca2291e708a9f46a7a47fe4e36d" Feb 17 00:25:53 crc kubenswrapper[4805]: I0217 00:25:53.921064 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-8tt24" Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.018258 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpllg\" (UniqueName: \"kubernetes.io/projected/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-kube-api-access-xpllg\") pod \"0ed7bf5a-a6c8-47a3-8e66-0401495250f3\" (UID: \"0ed7bf5a-a6c8-47a3-8e66-0401495250f3\") " Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.018317 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-serviceca\") pod \"0ed7bf5a-a6c8-47a3-8e66-0401495250f3\" (UID: \"0ed7bf5a-a6c8-47a3-8e66-0401495250f3\") " Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.019558 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-serviceca" (OuterVolumeSpecName: "serviceca") pod "0ed7bf5a-a6c8-47a3-8e66-0401495250f3" (UID: "0ed7bf5a-a6c8-47a3-8e66-0401495250f3"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.035839 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-kube-api-access-xpllg" (OuterVolumeSpecName: "kube-api-access-xpllg") pod "0ed7bf5a-a6c8-47a3-8e66-0401495250f3" (UID: "0ed7bf5a-a6c8-47a3-8e66-0401495250f3"). InnerVolumeSpecName "kube-api-access-xpllg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.120101 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpllg\" (UniqueName: \"kubernetes.io/projected/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-kube-api-access-xpllg\") on node \"crc\" DevicePath \"\"" Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.120128 4805 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0ed7bf5a-a6c8-47a3-8e66-0401495250f3-serviceca\") on node \"crc\" DevicePath \"\"" Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.331346 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jnv59"] Feb 17 00:25:54 crc kubenswrapper[4805]: W0217 00:25:54.347235 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86b8a270_8cb3_4266_9fe0_3cfd027a9174.slice/crio-a0a1758d113715547c92283ca05c393f14b561867a91e41cbb54b8aaa7a56359 WatchSource:0}: Error finding container a0a1758d113715547c92283ca05c393f14b561867a91e41cbb54b8aaa7a56359: Status 404 returned error can't find the container with id a0a1758d113715547c92283ca05c393f14b561867a91e41cbb54b8aaa7a56359 Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.854063 4805 generic.go:334] "Generic (PLEG): container finished" podID="a77c3401-47c1-41a8-806a-0bdb1ad48302" containerID="a44050be338c8e770710bd41e0908eeef7482811ead9f5ca45701857d51f5d03" exitCode=0 Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.854143 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gstnk" event={"ID":"a77c3401-47c1-41a8-806a-0bdb1ad48302","Type":"ContainerDied","Data":"a44050be338c8e770710bd41e0908eeef7482811ead9f5ca45701857d51f5d03"} Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.856275 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jnv59" event={"ID":"86b8a270-8cb3-4266-9fe0-3cfd027a9174","Type":"ContainerStarted","Data":"a0a1758d113715547c92283ca05c393f14b561867a91e41cbb54b8aaa7a56359"} Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.858671 4805 generic.go:334] "Generic (PLEG): container finished" podID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" containerID="fa390b0d307a68d5bcaa7b8c1f963e1e2b3d668631e63931c3cafa4d379e5eae" exitCode=0 Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.859156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5xg9" event={"ID":"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77","Type":"ContainerDied","Data":"fa390b0d307a68d5bcaa7b8c1f963e1e2b3d668631e63931c3cafa4d379e5eae"} Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.871534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz92r" 
event={"ID":"2342db7f-2c3a-431e-a891-e844a7284298","Type":"ContainerStarted","Data":"2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054"} Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.880031 4805 generic.go:334] "Generic (PLEG): container finished" podID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" containerID="fef6024826c7a851490a951fb373ab51a5d29a416d9bfaebaba555ecca340b23" exitCode=0 Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.880879 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jg6vt" event={"ID":"b09f5ed1-a921-4af2-abfe-e9066d9aa05e","Type":"ContainerDied","Data":"fef6024826c7a851490a951fb373ab51a5d29a416d9bfaebaba555ecca340b23"} Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.885962 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hfzb" event={"ID":"ceb73aa9-1038-44da-adce-a56dddfbdaa0","Type":"ContainerStarted","Data":"01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224"} Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.894387 4805 generic.go:334] "Generic (PLEG): container finished" podID="588d69d5-2637-42bf-a73a-d0f88ab29b83" containerID="2df4866e6b1f589f9b5aaa4f49fba6f67ab5043e8a492344eb0029a8f7ae1366" exitCode=0 Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.894606 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r825t" event={"ID":"588d69d5-2637-42bf-a73a-d0f88ab29b83","Type":"ContainerDied","Data":"2df4866e6b1f589f9b5aaa4f49fba6f67ab5043e8a492344eb0029a8f7ae1366"} Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.911628 4805 generic.go:334] "Generic (PLEG): container finished" podID="3f799a43-6325-4943-8c49-58ad9822eb77" containerID="0c0a77cc239b483594c0f9205938dd72b0e6619bda4422206e618e5ad064b55c" exitCode=0 Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.911715 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcmd9" event={"ID":"3f799a43-6325-4943-8c49-58ad9822eb77","Type":"ContainerDied","Data":"0c0a77cc239b483594c0f9205938dd72b0e6619bda4422206e618e5ad064b55c"} Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.919188 4805 generic.go:334] "Generic (PLEG): container finished" podID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" containerID="79d4bc4d1576c88122fb2eded6821731d4c84dbe1ab067baa3b4759187594001" exitCode=0 Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.919299 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv74b" event={"ID":"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741","Type":"ContainerDied","Data":"79d4bc4d1576c88122fb2eded6821731d4c84dbe1ab067baa3b4759187594001"} Feb 17 00:25:54 crc kubenswrapper[4805]: I0217 00:25:54.919569 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-8tt24" Feb 17 00:25:55 crc kubenswrapper[4805]: I0217 00:25:55.927559 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jnv59" event={"ID":"86b8a270-8cb3-4266-9fe0-3cfd027a9174","Type":"ContainerStarted","Data":"2a2842f95730ad60aa02600d37f5312b709897b26e20ec66fbe26e3093ad58cf"} Feb 17 00:25:55 crc kubenswrapper[4805]: I0217 00:25:55.927900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jnv59" event={"ID":"86b8a270-8cb3-4266-9fe0-3cfd027a9174","Type":"ContainerStarted","Data":"7c4eb84a5c6f7be2c93177e72a359c85ec60b9fc96406def8085f372665b392b"} Feb 17 00:25:55 crc kubenswrapper[4805]: I0217 00:25:55.929042 4805 generic.go:334] "Generic (PLEG): container finished" podID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerID="01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224" exitCode=0 Feb 17 00:25:55 crc kubenswrapper[4805]: I0217 00:25:55.929135 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hfzb" event={"ID":"ceb73aa9-1038-44da-adce-a56dddfbdaa0","Type":"ContainerDied","Data":"01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224"} Feb 17 00:25:55 crc kubenswrapper[4805]: I0217 00:25:55.931052 4805 generic.go:334] "Generic (PLEG): container finished" podID="2342db7f-2c3a-431e-a891-e844a7284298" containerID="2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054" exitCode=0 Feb 17 00:25:55 crc kubenswrapper[4805]: I0217 00:25:55.931100 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz92r" event={"ID":"2342db7f-2c3a-431e-a891-e844a7284298","Type":"ContainerDied","Data":"2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054"} Feb 17 00:25:55 crc kubenswrapper[4805]: I0217 00:25:55.943536 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jnv59" podStartSLOduration=160.943517927 podStartE2EDuration="2m40.943517927s" podCreationTimestamp="2026-02-17 00:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:25:55.943123125 +0000 UTC m=+181.958932533" watchObservedRunningTime="2026-02-17 00:25:55.943517927 +0000 UTC m=+181.959327335" Feb 17 00:25:56 crc kubenswrapper[4805]: I0217 00:25:56.451972 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cp7v9" Feb 17 00:25:56 crc kubenswrapper[4805]: I0217 00:25:56.938504 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jg6vt" event={"ID":"b09f5ed1-a921-4af2-abfe-e9066d9aa05e","Type":"ContainerStarted","Data":"221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6"} Feb 17 00:25:57 crc kubenswrapper[4805]: I0217 00:25:57.961922 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jg6vt" podStartSLOduration=3.840333685 podStartE2EDuration="35.961906372s" podCreationTimestamp="2026-02-17 00:25:22 +0000 UTC" firstStartedPulling="2026-02-17 00:25:24.250116593 +0000 UTC m=+150.265925991" lastFinishedPulling="2026-02-17 00:25:56.37168928 +0000 UTC m=+182.387498678" observedRunningTime="2026-02-17 00:25:57.959871402 +0000 UTC m=+183.975680810" 
watchObservedRunningTime="2026-02-17 00:25:57.961906372 +0000 UTC m=+183.977715770" Feb 17 00:25:58 crc kubenswrapper[4805]: I0217 00:25:58.949857 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv74b" event={"ID":"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741","Type":"ContainerStarted","Data":"57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be"} Feb 17 00:25:58 crc kubenswrapper[4805]: I0217 00:25:58.967900 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bv74b" podStartSLOduration=3.543585478 podStartE2EDuration="35.96788307s" podCreationTimestamp="2026-02-17 00:25:23 +0000 UTC" firstStartedPulling="2026-02-17 00:25:25.401528839 +0000 UTC m=+151.417338237" lastFinishedPulling="2026-02-17 00:25:57.825826391 +0000 UTC m=+183.841635829" observedRunningTime="2026-02-17 00:25:58.964914002 +0000 UTC m=+184.980723410" watchObservedRunningTime="2026-02-17 00:25:58.96788307 +0000 UTC m=+184.983692468" Feb 17 00:25:59 crc kubenswrapper[4805]: I0217 00:25:59.966665 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r825t" event={"ID":"588d69d5-2637-42bf-a73a-d0f88ab29b83","Type":"ContainerStarted","Data":"0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622"} Feb 17 00:25:59 crc kubenswrapper[4805]: I0217 00:25:59.993167 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r825t" podStartSLOduration=3.075696727 podStartE2EDuration="37.993152929s" podCreationTimestamp="2026-02-17 00:25:22 +0000 UTC" firstStartedPulling="2026-02-17 00:25:24.269225279 +0000 UTC m=+150.285034677" lastFinishedPulling="2026-02-17 00:25:59.186681481 +0000 UTC m=+185.202490879" observedRunningTime="2026-02-17 00:25:59.992892681 +0000 UTC m=+186.008702079" watchObservedRunningTime="2026-02-17 00:25:59.993152929 +0000 UTC m=+186.008962327" Feb 17 00:26:00 crc kubenswrapper[4805]: I0217 00:26:00.973631 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gstnk" event={"ID":"a77c3401-47c1-41a8-806a-0bdb1ad48302","Type":"ContainerStarted","Data":"64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d"} Feb 17 00:26:00 crc kubenswrapper[4805]: I0217 00:26:00.997817 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gstnk" podStartSLOduration=2.875708133 podStartE2EDuration="38.997796277s" podCreationTimestamp="2026-02-17 00:25:22 +0000 UTC" firstStartedPulling="2026-02-17 00:25:24.271412784 +0000 UTC m=+150.287222192" lastFinishedPulling="2026-02-17 00:26:00.393500898 +0000 UTC m=+186.409310336" observedRunningTime="2026-02-17 00:26:00.995385106 +0000 UTC m=+187.011194504" watchObservedRunningTime="2026-02-17 00:26:00.997796277 +0000 UTC m=+187.013605675" Feb 17 00:26:02 crc kubenswrapper[4805]: I0217 00:26:02.811496 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r825t" Feb 17 00:26:02 crc kubenswrapper[4805]: I0217 00:26:02.812106 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r825t" Feb 17 00:26:02 crc kubenswrapper[4805]: I0217 00:26:02.952455 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:26:02 crc 
kubenswrapper[4805]: I0217 00:26:02.952522 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:26:02 crc kubenswrapper[4805]: I0217 00:26:02.985585 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r825t" Feb 17 00:26:02 crc kubenswrapper[4805]: I0217 00:26:02.988486 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcmd9" event={"ID":"3f799a43-6325-4943-8c49-58ad9822eb77","Type":"ContainerStarted","Data":"6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646"} Feb 17 00:26:02 crc kubenswrapper[4805]: I0217 00:26:02.990589 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hfzb" event={"ID":"ceb73aa9-1038-44da-adce-a56dddfbdaa0","Type":"ContainerStarted","Data":"857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac"} Feb 17 00:26:02 crc kubenswrapper[4805]: I0217 00:26:02.993539 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5xg9" event={"ID":"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77","Type":"ContainerStarted","Data":"d8b7d77a933637ad8440cb18e43b6c9e0bda02216ee1b0888e8c3c9b0b819508"} Feb 17 00:26:02 crc kubenswrapper[4805]: I0217 00:26:02.995340 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:26:02 crc kubenswrapper[4805]: I0217 00:26:02.995639 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz92r" event={"ID":"2342db7f-2c3a-431e-a891-e844a7284298","Type":"ContainerStarted","Data":"2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04"} Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.043786 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.046234 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fcmd9" podStartSLOduration=3.104159351 podStartE2EDuration="39.046219232s" podCreationTimestamp="2026-02-17 00:25:24 +0000 UTC" firstStartedPulling="2026-02-17 00:25:26.570583786 +0000 UTC m=+152.586393184" lastFinishedPulling="2026-02-17 00:26:02.512643647 +0000 UTC m=+188.528453065" observedRunningTime="2026-02-17 00:26:03.029633121 +0000 UTC m=+189.045442519" watchObservedRunningTime="2026-02-17 00:26:03.046219232 +0000 UTC m=+189.062028630" Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.099882 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7hfzb" podStartSLOduration=3.083212071 podStartE2EDuration="38.099865481s" podCreationTimestamp="2026-02-17 00:25:25 +0000 UTC" firstStartedPulling="2026-02-17 00:25:27.619271989 +0000 UTC m=+153.635081387" lastFinishedPulling="2026-02-17 00:26:02.635925399 +0000 UTC m=+188.651734797" observedRunningTime="2026-02-17 00:26:03.068997237 +0000 UTC m=+189.084806635" watchObservedRunningTime="2026-02-17 00:26:03.099865481 +0000 UTC m=+189.115674879" Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.101121 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mz92r" podStartSLOduration=2.132879122 podStartE2EDuration="37.101113928s" 
podCreationTimestamp="2026-02-17 00:25:26 +0000 UTC" firstStartedPulling="2026-02-17 00:25:27.554307895 +0000 UTC m=+153.570117293" lastFinishedPulling="2026-02-17 00:26:02.522542701 +0000 UTC m=+188.538352099" observedRunningTime="2026-02-17 00:26:03.100012316 +0000 UTC m=+189.115821724" watchObservedRunningTime="2026-02-17 00:26:03.101113928 +0000 UTC m=+189.116923326" Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.126429 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w5xg9" podStartSLOduration=2.19796977 podStartE2EDuration="38.126410558s" podCreationTimestamp="2026-02-17 00:25:25 +0000 UTC" firstStartedPulling="2026-02-17 00:25:26.570647728 +0000 UTC m=+152.586457136" lastFinishedPulling="2026-02-17 00:26:02.499088516 +0000 UTC m=+188.514897924" observedRunningTime="2026-02-17 00:26:03.114872976 +0000 UTC m=+189.130682374" watchObservedRunningTime="2026-02-17 00:26:03.126410558 +0000 UTC m=+189.142219956" Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.242479 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.242526 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.292685 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.323852 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.379520 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.379595 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:26:03 crc kubenswrapper[4805]: I0217 00:26:03.425954 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:26:04 crc kubenswrapper[4805]: I0217 00:26:04.043019 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:26:04 crc kubenswrapper[4805]: I0217 00:26:04.046025 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r825t" Feb 17 00:26:04 crc kubenswrapper[4805]: I0217 00:26:04.212304 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b4l7s"] Feb 17 00:26:04 crc kubenswrapper[4805]: I0217 00:26:04.954047 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:26:04 crc kubenswrapper[4805]: I0217 00:26:04.954390 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.323971 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 00:26:05 crc kubenswrapper[4805]: E0217 00:26:05.324207 4805 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0ed7bf5a-a6c8-47a3-8e66-0401495250f3" containerName="image-pruner" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.324218 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ed7bf5a-a6c8-47a3-8e66-0401495250f3" containerName="image-pruner" Feb 17 00:26:05 crc kubenswrapper[4805]: E0217 00:26:05.324235 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856feac-6d7b-4f41-86c8-5a49633ad81b" containerName="pruner" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.324243 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856feac-6d7b-4f41-86c8-5a49633ad81b" containerName="pruner" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.324348 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ed7bf5a-a6c8-47a3-8e66-0401495250f3" containerName="image-pruner" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.324359 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856feac-6d7b-4f41-86c8-5a49633ad81b" containerName="pruner" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.324700 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.327090 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.331843 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.340334 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.387127 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.387170 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.420044 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77d3fb73-5029-4985-921d-2ae33134a7fe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"77d3fb73-5029-4985-921d-2ae33134a7fe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.420085 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77d3fb73-5029-4985-921d-2ae33134a7fe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"77d3fb73-5029-4985-921d-2ae33134a7fe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.425505 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.521787 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77d3fb73-5029-4985-921d-2ae33134a7fe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"77d3fb73-5029-4985-921d-2ae33134a7fe\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.521887 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77d3fb73-5029-4985-921d-2ae33134a7fe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"77d3fb73-5029-4985-921d-2ae33134a7fe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.521980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77d3fb73-5029-4985-921d-2ae33134a7fe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"77d3fb73-5029-4985-921d-2ae33134a7fe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.540110 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77d3fb73-5029-4985-921d-2ae33134a7fe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"77d3fb73-5029-4985-921d-2ae33134a7fe\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.640433 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.853447 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.966703 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.966755 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:26:05 crc kubenswrapper[4805]: I0217 00:26:05.998645 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fcmd9" podUID="3f799a43-6325-4943-8c49-58ad9822eb77" containerName="registry-server" probeResult="failure" output=< Feb 17 00:26:05 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 00:26:05 crc kubenswrapper[4805]: > Feb 17 00:26:06 crc kubenswrapper[4805]: I0217 00:26:06.017892 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"77d3fb73-5029-4985-921d-2ae33134a7fe","Type":"ContainerStarted","Data":"09fa7c696a566e1fc44237c79bd67bc43089fe0195ba455fb00e6f8c0422c0bd"} Feb 17 00:26:06 crc kubenswrapper[4805]: I0217 00:26:06.432366 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:26:06 crc kubenswrapper[4805]: I0217 00:26:06.432432 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.016489 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7hfzb" podUID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerName="registry-server" probeResult="failure" output=< Feb 17 00:26:07 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 00:26:07 crc kubenswrapper[4805]: > Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.026677 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"77d3fb73-5029-4985-921d-2ae33134a7fe","Type":"ContainerStarted","Data":"29216220087bbda0fd47206dffe003300cd9cd651a09a7d35f4fc27d2924cf8d"} Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.043087 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.043071791 podStartE2EDuration="2.043071791s" podCreationTimestamp="2026-02-17 00:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:26:07.041597217 +0000 UTC m=+193.057406615" watchObservedRunningTime="2026-02-17 00:26:07.043071791 +0000 UTC m=+193.058881189" Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.084379 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bv74b"] Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.084574 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bv74b" podUID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" containerName="registry-server" containerID="cri-o://57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be" gracePeriod=2 Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.474649 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mz92r" podUID="2342db7f-2c3a-431e-a891-e844a7284298" containerName="registry-server" probeResult="failure" output=< Feb 17 00:26:07 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 00:26:07 crc kubenswrapper[4805]: > Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.543196 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.672095 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5hwp\" (UniqueName: \"kubernetes.io/projected/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-kube-api-access-m5hwp\") pod \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.672212 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-utilities\") pod \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.672245 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-catalog-content\") pod \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\" (UID: \"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741\") " Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.673168 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-utilities" (OuterVolumeSpecName: "utilities") pod "ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" (UID: "ee6fe5f1-e028-4ff7-9edb-f547d9f7e741"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.679489 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-kube-api-access-m5hwp" (OuterVolumeSpecName: "kube-api-access-m5hwp") pod "ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" (UID: "ee6fe5f1-e028-4ff7-9edb-f547d9f7e741"). InnerVolumeSpecName "kube-api-access-m5hwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.720073 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" (UID: "ee6fe5f1-e028-4ff7-9edb-f547d9f7e741"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.773424 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.773466 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:07 crc kubenswrapper[4805]: I0217 00:26:07.773482 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5hwp\" (UniqueName: \"kubernetes.io/projected/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741-kube-api-access-m5hwp\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.034833 4805 generic.go:334] "Generic (PLEG): container finished" podID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" containerID="57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be" exitCode=0 Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.034877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv74b" event={"ID":"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741","Type":"ContainerDied","Data":"57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be"} Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.034900 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bv74b" Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.034926 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv74b" event={"ID":"ee6fe5f1-e028-4ff7-9edb-f547d9f7e741","Type":"ContainerDied","Data":"2f32bd46ff9f1eb907008350e0eee123d3a570ae3d49412df0eb033ddabc9658"} Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.034945 4805 scope.go:117] "RemoveContainer" containerID="57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be" Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.036629 4805 generic.go:334] "Generic (PLEG): container finished" podID="77d3fb73-5029-4985-921d-2ae33134a7fe" containerID="29216220087bbda0fd47206dffe003300cd9cd651a09a7d35f4fc27d2924cf8d" exitCode=0 Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.036658 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"77d3fb73-5029-4985-921d-2ae33134a7fe","Type":"ContainerDied","Data":"29216220087bbda0fd47206dffe003300cd9cd651a09a7d35f4fc27d2924cf8d"} Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.067180 4805 scope.go:117] "RemoveContainer" containerID="79d4bc4d1576c88122fb2eded6821731d4c84dbe1ab067baa3b4759187594001" Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.090533 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bv74b"] Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.094830 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bv74b"] Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.095464 4805 scope.go:117] "RemoveContainer" containerID="3e2fb61c1503979852241e942510ec93491f4b45cc6a368ab87340617259d326" Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.116542 4805 scope.go:117] "RemoveContainer" containerID="57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be" Feb 17 00:26:08 crc kubenswrapper[4805]: E0217 00:26:08.116908 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be\": container with ID starting with 57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be not found: ID does not exist" containerID="57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be" Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.116969 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be"} err="failed to get container status \"57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be\": rpc error: code = NotFound desc = could not find container \"57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be\": container with ID starting with 57ea4dbc46a07ec67e4c3c72f3dfea427adfcbeee20482c4cfbe8af0970af1be not found: ID does not exist" Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.117013 4805 scope.go:117] "RemoveContainer" containerID="79d4bc4d1576c88122fb2eded6821731d4c84dbe1ab067baa3b4759187594001" Feb 17 00:26:08 crc kubenswrapper[4805]: E0217 00:26:08.117292 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79d4bc4d1576c88122fb2eded6821731d4c84dbe1ab067baa3b4759187594001\": container with ID 
starting with 79d4bc4d1576c88122fb2eded6821731d4c84dbe1ab067baa3b4759187594001 not found: ID does not exist" containerID="79d4bc4d1576c88122fb2eded6821731d4c84dbe1ab067baa3b4759187594001" Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.117317 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d4bc4d1576c88122fb2eded6821731d4c84dbe1ab067baa3b4759187594001"} err="failed to get container status \"79d4bc4d1576c88122fb2eded6821731d4c84dbe1ab067baa3b4759187594001\": rpc error: code = NotFound desc = could not find container \"79d4bc4d1576c88122fb2eded6821731d4c84dbe1ab067baa3b4759187594001\": container with ID starting with 79d4bc4d1576c88122fb2eded6821731d4c84dbe1ab067baa3b4759187594001 not found: ID does not exist" Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.117363 4805 scope.go:117] "RemoveContainer" containerID="3e2fb61c1503979852241e942510ec93491f4b45cc6a368ab87340617259d326" Feb 17 00:26:08 crc kubenswrapper[4805]: E0217 00:26:08.117734 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e2fb61c1503979852241e942510ec93491f4b45cc6a368ab87340617259d326\": container with ID starting with 3e2fb61c1503979852241e942510ec93491f4b45cc6a368ab87340617259d326 not found: ID does not exist" containerID="3e2fb61c1503979852241e942510ec93491f4b45cc6a368ab87340617259d326" Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.117757 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e2fb61c1503979852241e942510ec93491f4b45cc6a368ab87340617259d326"} err="failed to get container status \"3e2fb61c1503979852241e942510ec93491f4b45cc6a368ab87340617259d326\": rpc error: code = NotFound desc = could not find container \"3e2fb61c1503979852241e942510ec93491f4b45cc6a368ab87340617259d326\": container with ID starting with 3e2fb61c1503979852241e942510ec93491f4b45cc6a368ab87340617259d326 not found: ID does not exist" Feb 17 00:26:08 crc kubenswrapper[4805]: I0217 00:26:08.791203 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" path="/var/lib/kubelet/pods/ee6fe5f1-e028-4ff7-9edb-f547d9f7e741/volumes" Feb 17 00:26:09 crc kubenswrapper[4805]: I0217 00:26:09.295555 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 00:26:09 crc kubenswrapper[4805]: I0217 00:26:09.397486 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77d3fb73-5029-4985-921d-2ae33134a7fe-kubelet-dir\") pod \"77d3fb73-5029-4985-921d-2ae33134a7fe\" (UID: \"77d3fb73-5029-4985-921d-2ae33134a7fe\") " Feb 17 00:26:09 crc kubenswrapper[4805]: I0217 00:26:09.397565 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77d3fb73-5029-4985-921d-2ae33134a7fe-kube-api-access\") pod \"77d3fb73-5029-4985-921d-2ae33134a7fe\" (UID: \"77d3fb73-5029-4985-921d-2ae33134a7fe\") " Feb 17 00:26:09 crc kubenswrapper[4805]: I0217 00:26:09.397613 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d3fb73-5029-4985-921d-2ae33134a7fe-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "77d3fb73-5029-4985-921d-2ae33134a7fe" (UID: "77d3fb73-5029-4985-921d-2ae33134a7fe"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:26:09 crc kubenswrapper[4805]: I0217 00:26:09.397837 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77d3fb73-5029-4985-921d-2ae33134a7fe-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:09 crc kubenswrapper[4805]: I0217 00:26:09.401170 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77d3fb73-5029-4985-921d-2ae33134a7fe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "77d3fb73-5029-4985-921d-2ae33134a7fe" (UID: "77d3fb73-5029-4985-921d-2ae33134a7fe"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:26:09 crc kubenswrapper[4805]: I0217 00:26:09.498978 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77d3fb73-5029-4985-921d-2ae33134a7fe-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:10 crc kubenswrapper[4805]: I0217 00:26:10.048593 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"77d3fb73-5029-4985-921d-2ae33134a7fe","Type":"ContainerDied","Data":"09fa7c696a566e1fc44237c79bd67bc43089fe0195ba455fb00e6f8c0422c0bd"} Feb 17 00:26:10 crc kubenswrapper[4805]: I0217 00:26:10.048631 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09fa7c696a566e1fc44237c79bd67bc43089fe0195ba455fb00e6f8c0422c0bd" Feb 17 00:26:10 crc kubenswrapper[4805]: I0217 00:26:10.048649 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 00:26:13 crc kubenswrapper[4805]: I0217 00:26:13.282885 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:26:13 crc kubenswrapper[4805]: I0217 00:26:13.912470 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 00:26:13 crc kubenswrapper[4805]: E0217 00:26:13.912679 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" containerName="extract-utilities" Feb 17 00:26:13 crc kubenswrapper[4805]: I0217 00:26:13.912691 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" containerName="extract-utilities" Feb 17 00:26:13 crc kubenswrapper[4805]: E0217 00:26:13.912704 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" containerName="registry-server" Feb 17 00:26:13 crc kubenswrapper[4805]: I0217 00:26:13.912711 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" containerName="registry-server" Feb 17 00:26:13 crc kubenswrapper[4805]: E0217 00:26:13.912721 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" containerName="extract-content" Feb 17 00:26:13 crc kubenswrapper[4805]: I0217 00:26:13.912726 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" containerName="extract-content" Feb 17 00:26:13 crc kubenswrapper[4805]: E0217 00:26:13.912743 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77d3fb73-5029-4985-921d-2ae33134a7fe" containerName="pruner" Feb 17 00:26:13 crc kubenswrapper[4805]: 
I0217 00:26:13.912749 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="77d3fb73-5029-4985-921d-2ae33134a7fe" containerName="pruner" Feb 17 00:26:13 crc kubenswrapper[4805]: I0217 00:26:13.912843 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee6fe5f1-e028-4ff7-9edb-f547d9f7e741" containerName="registry-server" Feb 17 00:26:13 crc kubenswrapper[4805]: I0217 00:26:13.912855 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="77d3fb73-5029-4985-921d-2ae33134a7fe" containerName="pruner" Feb 17 00:26:13 crc kubenswrapper[4805]: I0217 00:26:13.913166 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:13 crc kubenswrapper[4805]: I0217 00:26:13.916801 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 17 00:26:13 crc kubenswrapper[4805]: I0217 00:26:13.916979 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 17 00:26:13 crc kubenswrapper[4805]: I0217 00:26:13.922046 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 00:26:14 crc kubenswrapper[4805]: I0217 00:26:14.058583 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/142f7345-c74d-4880-8c0e-ca32d39e9d78-kube-api-access\") pod \"installer-9-crc\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:14 crc kubenswrapper[4805]: I0217 00:26:14.058665 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-kubelet-dir\") pod \"installer-9-crc\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:14 crc kubenswrapper[4805]: I0217 00:26:14.058718 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-var-lock\") pod \"installer-9-crc\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:14 crc kubenswrapper[4805]: I0217 00:26:14.160652 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/142f7345-c74d-4880-8c0e-ca32d39e9d78-kube-api-access\") pod \"installer-9-crc\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:14 crc kubenswrapper[4805]: I0217 00:26:14.160766 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-kubelet-dir\") pod \"installer-9-crc\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:14 crc kubenswrapper[4805]: I0217 00:26:14.160813 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-var-lock\") pod \"installer-9-crc\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 
17 00:26:14 crc kubenswrapper[4805]: I0217 00:26:14.160960 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-var-lock\") pod \"installer-9-crc\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:14 crc kubenswrapper[4805]: I0217 00:26:14.161206 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-kubelet-dir\") pod \"installer-9-crc\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:14 crc kubenswrapper[4805]: I0217 00:26:14.192480 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/142f7345-c74d-4880-8c0e-ca32d39e9d78-kube-api-access\") pod \"installer-9-crc\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:14 crc kubenswrapper[4805]: I0217 00:26:14.230038 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:14 crc kubenswrapper[4805]: I0217 00:26:14.429795 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.076826 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"142f7345-c74d-4880-8c0e-ca32d39e9d78","Type":"ContainerStarted","Data":"ef548f280666787897820e62b49a33f734db2ca4fedda70339a60b3dadeba681"} Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.292936 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gstnk"] Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.293273 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gstnk" podUID="a77c3401-47c1-41a8-806a-0bdb1ad48302" containerName="registry-server" containerID="cri-o://64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d" gracePeriod=2 Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.297395 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.360138 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.458006 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.631491 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.780952 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8twhg\" (UniqueName: \"kubernetes.io/projected/a77c3401-47c1-41a8-806a-0bdb1ad48302-kube-api-access-8twhg\") pod \"a77c3401-47c1-41a8-806a-0bdb1ad48302\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.781073 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-catalog-content\") pod \"a77c3401-47c1-41a8-806a-0bdb1ad48302\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.781136 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-utilities\") pod \"a77c3401-47c1-41a8-806a-0bdb1ad48302\" (UID: \"a77c3401-47c1-41a8-806a-0bdb1ad48302\") " Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.782483 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-utilities" (OuterVolumeSpecName: "utilities") pod "a77c3401-47c1-41a8-806a-0bdb1ad48302" (UID: "a77c3401-47c1-41a8-806a-0bdb1ad48302"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.789038 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a77c3401-47c1-41a8-806a-0bdb1ad48302-kube-api-access-8twhg" (OuterVolumeSpecName: "kube-api-access-8twhg") pod "a77c3401-47c1-41a8-806a-0bdb1ad48302" (UID: "a77c3401-47c1-41a8-806a-0bdb1ad48302"). InnerVolumeSpecName "kube-api-access-8twhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.869093 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a77c3401-47c1-41a8-806a-0bdb1ad48302" (UID: "a77c3401-47c1-41a8-806a-0bdb1ad48302"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.883148 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8twhg\" (UniqueName: \"kubernetes.io/projected/a77c3401-47c1-41a8-806a-0bdb1ad48302-kube-api-access-8twhg\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.883198 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:15 crc kubenswrapper[4805]: I0217 00:26:15.883231 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a77c3401-47c1-41a8-806a-0bdb1ad48302-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.010201 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.060433 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.084586 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"142f7345-c74d-4880-8c0e-ca32d39e9d78","Type":"ContainerStarted","Data":"a6fe4aa60d36f968936b82169147a0a4a0d0fe6ca9a59e7a9f97948ddd274d77"} Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.090108 4805 generic.go:334] "Generic (PLEG): container finished" podID="a77c3401-47c1-41a8-806a-0bdb1ad48302" containerID="64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d" exitCode=0 Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.090993 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gstnk" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.101386 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gstnk" event={"ID":"a77c3401-47c1-41a8-806a-0bdb1ad48302","Type":"ContainerDied","Data":"64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d"} Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.101479 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gstnk" event={"ID":"a77c3401-47c1-41a8-806a-0bdb1ad48302","Type":"ContainerDied","Data":"f9d45cb0704f67453ae2f381dcd630de590077c7ca6eae7d51861cc8c95ce4bf"} Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.101528 4805 scope.go:117] "RemoveContainer" containerID="64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.113406 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.113391133 podStartE2EDuration="3.113391133s" podCreationTimestamp="2026-02-17 00:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:26:16.11164411 +0000 UTC m=+202.127453508" watchObservedRunningTime="2026-02-17 00:26:16.113391133 +0000 UTC m=+202.129200531" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.122611 4805 scope.go:117] "RemoveContainer" containerID="a44050be338c8e770710bd41e0908eeef7482811ead9f5ca45701857d51f5d03" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.126786 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gstnk"] Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.139817 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gstnk"] Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.155874 4805 scope.go:117] "RemoveContainer" containerID="0bf0a268aed44a681fe6ab28919de5c1bb4b3db1368053b6666b1a2e9f91fdad" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.168004 4805 scope.go:117] "RemoveContainer" containerID="64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d" Feb 17 00:26:16 crc kubenswrapper[4805]: E0217 00:26:16.168430 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d\": container with ID starting with 64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d not found: ID does not exist" containerID="64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.168553 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d"} err="failed to get container status \"64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d\": rpc error: code = NotFound desc = could not find container \"64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d\": container with ID starting with 64950050213e22d1ab620365d7c4778a59ba849ce2fdcb0e467e59d80ceb004d not found: ID does not exist" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.168607 4805 scope.go:117] "RemoveContainer" 
containerID="a44050be338c8e770710bd41e0908eeef7482811ead9f5ca45701857d51f5d03" Feb 17 00:26:16 crc kubenswrapper[4805]: E0217 00:26:16.169185 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a44050be338c8e770710bd41e0908eeef7482811ead9f5ca45701857d51f5d03\": container with ID starting with a44050be338c8e770710bd41e0908eeef7482811ead9f5ca45701857d51f5d03 not found: ID does not exist" containerID="a44050be338c8e770710bd41e0908eeef7482811ead9f5ca45701857d51f5d03" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.169219 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a44050be338c8e770710bd41e0908eeef7482811ead9f5ca45701857d51f5d03"} err="failed to get container status \"a44050be338c8e770710bd41e0908eeef7482811ead9f5ca45701857d51f5d03\": rpc error: code = NotFound desc = could not find container \"a44050be338c8e770710bd41e0908eeef7482811ead9f5ca45701857d51f5d03\": container with ID starting with a44050be338c8e770710bd41e0908eeef7482811ead9f5ca45701857d51f5d03 not found: ID does not exist" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.169261 4805 scope.go:117] "RemoveContainer" containerID="0bf0a268aed44a681fe6ab28919de5c1bb4b3db1368053b6666b1a2e9f91fdad" Feb 17 00:26:16 crc kubenswrapper[4805]: E0217 00:26:16.169593 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bf0a268aed44a681fe6ab28919de5c1bb4b3db1368053b6666b1a2e9f91fdad\": container with ID starting with 0bf0a268aed44a681fe6ab28919de5c1bb4b3db1368053b6666b1a2e9f91fdad not found: ID does not exist" containerID="0bf0a268aed44a681fe6ab28919de5c1bb4b3db1368053b6666b1a2e9f91fdad" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.169641 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf0a268aed44a681fe6ab28919de5c1bb4b3db1368053b6666b1a2e9f91fdad"} err="failed to get container status \"0bf0a268aed44a681fe6ab28919de5c1bb4b3db1368053b6666b1a2e9f91fdad\": rpc error: code = NotFound desc = could not find container \"0bf0a268aed44a681fe6ab28919de5c1bb4b3db1368053b6666b1a2e9f91fdad\": container with ID starting with 0bf0a268aed44a681fe6ab28919de5c1bb4b3db1368053b6666b1a2e9f91fdad not found: ID does not exist" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.517132 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.599251 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:26:16 crc kubenswrapper[4805]: I0217 00:26:16.790762 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a77c3401-47c1-41a8-806a-0bdb1ad48302" path="/var/lib/kubelet/pods/a77c3401-47c1-41a8-806a-0bdb1ad48302/volumes" Feb 17 00:26:17 crc kubenswrapper[4805]: I0217 00:26:17.688012 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5xg9"] Feb 17 00:26:17 crc kubenswrapper[4805]: I0217 00:26:17.688423 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w5xg9" podUID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" containerName="registry-server" containerID="cri-o://d8b7d77a933637ad8440cb18e43b6c9e0bda02216ee1b0888e8c3c9b0b819508" gracePeriod=2 Feb 17 
00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.106197 4805 generic.go:334] "Generic (PLEG): container finished" podID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" containerID="d8b7d77a933637ad8440cb18e43b6c9e0bda02216ee1b0888e8c3c9b0b819508" exitCode=0 Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.106288 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5xg9" event={"ID":"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77","Type":"ContainerDied","Data":"d8b7d77a933637ad8440cb18e43b6c9e0bda02216ee1b0888e8c3c9b0b819508"} Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.106583 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5xg9" event={"ID":"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77","Type":"ContainerDied","Data":"8e47ad67346a861206c3ca6c4a049d07e514209468cd6a3464004d81a4fbda5e"} Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.106603 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e47ad67346a861206c3ca6c4a049d07e514209468cd6a3464004d81a4fbda5e" Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.108443 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.228886 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-utilities\") pod \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.228994 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7k64\" (UniqueName: \"kubernetes.io/projected/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-kube-api-access-p7k64\") pod \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.229054 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-catalog-content\") pod \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\" (UID: \"d4d2c36c-b305-4234-b3aa-31b0c3cd7f77\") " Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.229905 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-utilities" (OuterVolumeSpecName: "utilities") pod "d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" (UID: "d4d2c36c-b305-4234-b3aa-31b0c3cd7f77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.237779 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-kube-api-access-p7k64" (OuterVolumeSpecName: "kube-api-access-p7k64") pod "d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" (UID: "d4d2c36c-b305-4234-b3aa-31b0c3cd7f77"). InnerVolumeSpecName "kube-api-access-p7k64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.253086 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" (UID: "d4d2c36c-b305-4234-b3aa-31b0c3cd7f77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.330253 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.330309 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7k64\" (UniqueName: \"kubernetes.io/projected/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-kube-api-access-p7k64\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:18 crc kubenswrapper[4805]: I0217 00:26:18.330362 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.111412 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5xg9" Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.131880 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5xg9"] Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.135824 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5xg9"] Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.490076 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mz92r"] Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.490553 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mz92r" podUID="2342db7f-2c3a-431e-a891-e844a7284298" containerName="registry-server" containerID="cri-o://2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04" gracePeriod=2 Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.818933 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.849738 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsw2w\" (UniqueName: \"kubernetes.io/projected/2342db7f-2c3a-431e-a891-e844a7284298-kube-api-access-wsw2w\") pod \"2342db7f-2c3a-431e-a891-e844a7284298\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.849846 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-catalog-content\") pod \"2342db7f-2c3a-431e-a891-e844a7284298\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.849912 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-utilities\") pod \"2342db7f-2c3a-431e-a891-e844a7284298\" (UID: \"2342db7f-2c3a-431e-a891-e844a7284298\") " Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.850644 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-utilities" (OuterVolumeSpecName: "utilities") pod "2342db7f-2c3a-431e-a891-e844a7284298" (UID: "2342db7f-2c3a-431e-a891-e844a7284298"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.858147 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2342db7f-2c3a-431e-a891-e844a7284298-kube-api-access-wsw2w" (OuterVolumeSpecName: "kube-api-access-wsw2w") pod "2342db7f-2c3a-431e-a891-e844a7284298" (UID: "2342db7f-2c3a-431e-a891-e844a7284298"). InnerVolumeSpecName "kube-api-access-wsw2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.951739 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsw2w\" (UniqueName: \"kubernetes.io/projected/2342db7f-2c3a-431e-a891-e844a7284298-kube-api-access-wsw2w\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:19 crc kubenswrapper[4805]: I0217 00:26:19.951774 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.022009 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2342db7f-2c3a-431e-a891-e844a7284298" (UID: "2342db7f-2c3a-431e-a891-e844a7284298"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.052929 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2342db7f-2c3a-431e-a891-e844a7284298-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.119600 4805 generic.go:334] "Generic (PLEG): container finished" podID="2342db7f-2c3a-431e-a891-e844a7284298" containerID="2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04" exitCode=0 Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.119672 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz92r" event={"ID":"2342db7f-2c3a-431e-a891-e844a7284298","Type":"ContainerDied","Data":"2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04"} Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.119725 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz92r" event={"ID":"2342db7f-2c3a-431e-a891-e844a7284298","Type":"ContainerDied","Data":"3c19a3d3e349c6cecc717a461f6a2fe69e45d2e54e565cb34133c25bac04874b"} Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.119752 4805 scope.go:117] "RemoveContainer" containerID="2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.119670 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mz92r" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.137536 4805 scope.go:117] "RemoveContainer" containerID="2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.159857 4805 scope.go:117] "RemoveContainer" containerID="c7b946015bd939755b24fd6d1e701b065f439bbc94367878f89046eeb9bfe91d" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.180348 4805 scope.go:117] "RemoveContainer" containerID="2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04" Feb 17 00:26:20 crc kubenswrapper[4805]: E0217 00:26:20.186047 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04\": container with ID starting with 2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04 not found: ID does not exist" containerID="2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.186107 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04"} err="failed to get container status \"2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04\": rpc error: code = NotFound desc = could not find container \"2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04\": container with ID starting with 2ee34516cf939e185069ef73cc7183ee3e9409d6addf698d781b040efc1ecf04 not found: ID does not exist" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.186150 4805 scope.go:117] "RemoveContainer" containerID="2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054" Feb 17 00:26:20 crc kubenswrapper[4805]: E0217 00:26:20.186767 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054\": container with ID starting with 2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054 not found: ID does not exist" containerID="2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.186795 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054"} err="failed to get container status \"2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054\": rpc error: code = NotFound desc = could not find container \"2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054\": container with ID starting with 2d16c10e3a0643af4647f44046e55dc1b5059979584ba052476527e36a894054 not found: ID does not exist" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.186816 4805 scope.go:117] "RemoveContainer" containerID="c7b946015bd939755b24fd6d1e701b065f439bbc94367878f89046eeb9bfe91d" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.187073 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mz92r"] Feb 17 00:26:20 crc kubenswrapper[4805]: E0217 00:26:20.187289 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7b946015bd939755b24fd6d1e701b065f439bbc94367878f89046eeb9bfe91d\": container with ID starting with c7b946015bd939755b24fd6d1e701b065f439bbc94367878f89046eeb9bfe91d not found: ID does not exist" containerID="c7b946015bd939755b24fd6d1e701b065f439bbc94367878f89046eeb9bfe91d" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.187354 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b946015bd939755b24fd6d1e701b065f439bbc94367878f89046eeb9bfe91d"} err="failed to get container status \"c7b946015bd939755b24fd6d1e701b065f439bbc94367878f89046eeb9bfe91d\": rpc error: code = NotFound desc = could not find container \"c7b946015bd939755b24fd6d1e701b065f439bbc94367878f89046eeb9bfe91d\": container with ID starting with c7b946015bd939755b24fd6d1e701b065f439bbc94367878f89046eeb9bfe91d not found: ID does not exist" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.190032 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mz92r"] Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.793851 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2342db7f-2c3a-431e-a891-e844a7284298" path="/var/lib/kubelet/pods/2342db7f-2c3a-431e-a891-e844a7284298/volumes" Feb 17 00:26:20 crc kubenswrapper[4805]: I0217 00:26:20.795498 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" path="/var/lib/kubelet/pods/d4d2c36c-b305-4234-b3aa-31b0c3cd7f77/volumes" Feb 17 00:26:23 crc kubenswrapper[4805]: I0217 00:26:23.076912 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:26:23 crc kubenswrapper[4805]: I0217 00:26:23.077239 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:26:23 crc kubenswrapper[4805]: I0217 00:26:23.077302 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:26:23 crc kubenswrapper[4805]: I0217 00:26:23.078050 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:26:23 crc kubenswrapper[4805]: I0217 00:26:23.078139 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287" gracePeriod=600 Feb 17 00:26:24 crc kubenswrapper[4805]: I0217 00:26:24.153157 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287" exitCode=0 Feb 17 00:26:24 crc kubenswrapper[4805]: I0217 00:26:24.153225 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287"} Feb 17 00:26:24 crc kubenswrapper[4805]: I0217 00:26:24.153548 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"bff5edca2c2cd9c3a1645d8c15227ed2d3c87621069f2931407d8d9904051961"} Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.243863 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" podUID="bf20469d-03a9-4939-841d-3c7d28b75aab" containerName="oauth-openshift" containerID="cri-o://9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849" gracePeriod=15 Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.630121 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.697649 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-serving-cert\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.697751 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-error\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.697817 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-policies\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.697847 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-cliconfig\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.697917 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-provider-selection\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.697992 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-trusted-ca-bundle\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.698019 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-session\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.698071 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-service-ca\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.698162 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-login\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 
00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.698243 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-idp-0-file-data\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.698302 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-dir\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.698403 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-router-certs\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.698792 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.699040 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.700176 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.700274 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-ocp-branding-template\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.700441 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p2rc\" (UniqueName: \"kubernetes.io/projected/bf20469d-03a9-4939-841d-3c7d28b75aab-kube-api-access-7p2rc\") pod \"bf20469d-03a9-4939-841d-3c7d28b75aab\" (UID: \"bf20469d-03a9-4939-841d-3c7d28b75aab\") " Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.700880 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.700909 4805 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.700924 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.702865 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.708719 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.709143 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.709628 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.710066 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.711765 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf20469d-03a9-4939-841d-3c7d28b75aab-kube-api-access-7p2rc" (OuterVolumeSpecName: "kube-api-access-7p2rc") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "kube-api-access-7p2rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.711856 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.712095 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.714036 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.715716 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.716681 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "bf20469d-03a9-4939-841d-3c7d28b75aab" (UID: "bf20469d-03a9-4939-841d-3c7d28b75aab"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.802164 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.802202 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.802215 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p2rc\" (UniqueName: \"kubernetes.io/projected/bf20469d-03a9-4939-841d-3c7d28b75aab-kube-api-access-7p2rc\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.802227 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.802239 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.802250 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.802262 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.802276 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.802287 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.802298 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:29 crc kubenswrapper[4805]: I0217 00:26:29.802310 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf20469d-03a9-4939-841d-3c7d28b75aab-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:30 crc kubenswrapper[4805]: I0217 00:26:30.198420 4805 generic.go:334] "Generic (PLEG): container finished" podID="bf20469d-03a9-4939-841d-3c7d28b75aab" 
containerID="9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849" exitCode=0 Feb 17 00:26:30 crc kubenswrapper[4805]: I0217 00:26:30.198519 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" event={"ID":"bf20469d-03a9-4939-841d-3c7d28b75aab","Type":"ContainerDied","Data":"9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849"} Feb 17 00:26:30 crc kubenswrapper[4805]: I0217 00:26:30.198588 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" event={"ID":"bf20469d-03a9-4939-841d-3c7d28b75aab","Type":"ContainerDied","Data":"a656fe2cf919830ebf9ccf2edd36c202a4007b24b7205357413f95e2686c3913"} Feb 17 00:26:30 crc kubenswrapper[4805]: I0217 00:26:30.198617 4805 scope.go:117] "RemoveContainer" containerID="9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849" Feb 17 00:26:30 crc kubenswrapper[4805]: I0217 00:26:30.198531 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b4l7s" Feb 17 00:26:30 crc kubenswrapper[4805]: I0217 00:26:30.229087 4805 scope.go:117] "RemoveContainer" containerID="9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849" Feb 17 00:26:30 crc kubenswrapper[4805]: E0217 00:26:30.229892 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849\": container with ID starting with 9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849 not found: ID does not exist" containerID="9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849" Feb 17 00:26:30 crc kubenswrapper[4805]: I0217 00:26:30.229977 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849"} err="failed to get container status \"9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849\": rpc error: code = NotFound desc = could not find container \"9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849\": container with ID starting with 9f3f8d57d36bcfa7c285e271a4722185a4dd67910294778bd5c7c952e13e0849 not found: ID does not exist" Feb 17 00:26:30 crc kubenswrapper[4805]: I0217 00:26:30.255548 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b4l7s"] Feb 17 00:26:30 crc kubenswrapper[4805]: I0217 00:26:30.258655 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b4l7s"] Feb 17 00:26:30 crc kubenswrapper[4805]: I0217 00:26:30.796421 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf20469d-03a9-4939-841d-3c7d28b75aab" path="/var/lib/kubelet/pods/bf20469d-03a9-4939-841d-3c7d28b75aab/volumes" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.623292 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7"] Feb 17 00:26:34 crc kubenswrapper[4805]: E0217 00:26:34.628936 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77c3401-47c1-41a8-806a-0bdb1ad48302" containerName="registry-server" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.628988 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77c3401-47c1-41a8-806a-0bdb1ad48302" 
containerName="registry-server" Feb 17 00:26:34 crc kubenswrapper[4805]: E0217 00:26:34.629021 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" containerName="registry-server" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.629030 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" containerName="registry-server" Feb 17 00:26:34 crc kubenswrapper[4805]: E0217 00:26:34.629056 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" containerName="extract-utilities" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.629067 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" containerName="extract-utilities" Feb 17 00:26:34 crc kubenswrapper[4805]: E0217 00:26:34.629085 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77c3401-47c1-41a8-806a-0bdb1ad48302" containerName="extract-content" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.629093 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77c3401-47c1-41a8-806a-0bdb1ad48302" containerName="extract-content" Feb 17 00:26:34 crc kubenswrapper[4805]: E0217 00:26:34.629104 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf20469d-03a9-4939-841d-3c7d28b75aab" containerName="oauth-openshift" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.629111 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf20469d-03a9-4939-841d-3c7d28b75aab" containerName="oauth-openshift" Feb 17 00:26:34 crc kubenswrapper[4805]: E0217 00:26:34.629122 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" containerName="extract-content" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.629222 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" containerName="extract-content" Feb 17 00:26:34 crc kubenswrapper[4805]: E0217 00:26:34.629247 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2342db7f-2c3a-431e-a891-e844a7284298" containerName="registry-server" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.629256 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2342db7f-2c3a-431e-a891-e844a7284298" containerName="registry-server" Feb 17 00:26:34 crc kubenswrapper[4805]: E0217 00:26:34.629267 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2342db7f-2c3a-431e-a891-e844a7284298" containerName="extract-utilities" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.629276 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2342db7f-2c3a-431e-a891-e844a7284298" containerName="extract-utilities" Feb 17 00:26:34 crc kubenswrapper[4805]: E0217 00:26:34.629306 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2342db7f-2c3a-431e-a891-e844a7284298" containerName="extract-content" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.629314 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2342db7f-2c3a-431e-a891-e844a7284298" containerName="extract-content" Feb 17 00:26:34 crc kubenswrapper[4805]: E0217 00:26:34.629352 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77c3401-47c1-41a8-806a-0bdb1ad48302" containerName="extract-utilities" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.629365 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a77c3401-47c1-41a8-806a-0bdb1ad48302" containerName="extract-utilities" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.630965 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a77c3401-47c1-41a8-806a-0bdb1ad48302" containerName="registry-server" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.630989 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d2c36c-b305-4234-b3aa-31b0c3cd7f77" containerName="registry-server" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.631100 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf20469d-03a9-4939-841d-3c7d28b75aab" containerName="oauth-openshift" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.631127 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2342db7f-2c3a-431e-a891-e844a7284298" containerName="registry-server" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.633198 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.640999 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7"] Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.641715 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.641724 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.641910 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.643205 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.643388 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.643747 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.644435 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.644459 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.644525 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.643791 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.644210 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.644960 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 
00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683259 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683361 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683419 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683454 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683525 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683553 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-452fs\" (UniqueName: \"kubernetes.io/projected/5489cb4f-d975-44f5-8b42-0a072d2c288a-kube-api-access-452fs\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683605 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683634 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683668 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683725 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683811 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5489cb4f-d975-44f5-8b42-0a072d2c288a-audit-dir\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683858 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683896 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.683926 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.686673 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.687789 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.706739 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.785715 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.785817 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5489cb4f-d975-44f5-8b42-0a072d2c288a-audit-dir\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.785899 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.785966 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.786025 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.786111 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.786166 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.786241 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.786355 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.786417 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.786464 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-452fs\" (UniqueName: \"kubernetes.io/projected/5489cb4f-d975-44f5-8b42-0a072d2c288a-kube-api-access-452fs\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.786507 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.786552 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.786601 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.786513 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5489cb4f-d975-44f5-8b42-0a072d2c288a-audit-dir\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.788828 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.789372 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.791146 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-audit-policies\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.796957 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.798194 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-template-error\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.798384 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-session\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.799267 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-template-login\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.799623 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.799956 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.801572 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.802543 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.802751 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5489cb4f-d975-44f5-8b42-0a072d2c288a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:34 crc kubenswrapper[4805]: I0217 00:26:34.806465 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-452fs\" (UniqueName: \"kubernetes.io/projected/5489cb4f-d975-44f5-8b42-0a072d2c288a-kube-api-access-452fs\") pod \"oauth-openshift-5d4b6f47b4-qsqp7\" (UID: \"5489cb4f-d975-44f5-8b42-0a072d2c288a\") " pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:35 crc kubenswrapper[4805]: I0217 00:26:35.003258 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:35 crc kubenswrapper[4805]: I0217 00:26:35.273290 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7"] Feb 17 00:26:36 crc kubenswrapper[4805]: I0217 00:26:36.244165 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" event={"ID":"5489cb4f-d975-44f5-8b42-0a072d2c288a","Type":"ContainerStarted","Data":"11e218d407d2b14fd3ce5ecec9a3faa28255589833aa51d19c67354873068b7c"} Feb 17 00:26:36 crc kubenswrapper[4805]: I0217 00:26:36.244660 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:36 crc kubenswrapper[4805]: I0217 00:26:36.244684 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" event={"ID":"5489cb4f-d975-44f5-8b42-0a072d2c288a","Type":"ContainerStarted","Data":"a06b97f310ae6f276f6d99f62b02f0a732072d8785a39ec1e3c4b13ff049fa41"} Feb 17 00:26:36 crc kubenswrapper[4805]: I0217 00:26:36.251537 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" Feb 17 00:26:36 crc kubenswrapper[4805]: I0217 00:26:36.276581 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5d4b6f47b4-qsqp7" podStartSLOduration=32.276551139 podStartE2EDuration="32.276551139s" podCreationTimestamp="2026-02-17 00:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:26:36.271340091 +0000 UTC m=+222.287149509" watchObservedRunningTime="2026-02-17 00:26:36.276551139 +0000 UTC m=+222.292360567" Feb 17 00:26:52 crc 
kubenswrapper[4805]: I0217 00:26:52.979218 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.981510 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.988732 4805 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989239 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989230 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c" gracePeriod=15 Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989529 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6" gracePeriod=15 Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989526 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922" gracePeriod=15 Feb 17 00:26:52 crc kubenswrapper[4805]: E0217 00:26:52.989629 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989526 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370" gracePeriod=15 Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989655 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b" gracePeriod=15 Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989656 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 00:26:52 crc kubenswrapper[4805]: E0217 00:26:52.989832 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989843 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 00:26:52 crc kubenswrapper[4805]: E0217 00:26:52.989855 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989861 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 00:26:52 crc kubenswrapper[4805]: E0217 00:26:52.989876 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989882 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 00:26:52 crc kubenswrapper[4805]: E0217 00:26:52.989891 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989898 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 00:26:52 crc kubenswrapper[4805]: E0217 00:26:52.989922 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989927 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 00:26:52 crc kubenswrapper[4805]: E0217 00:26:52.989938 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.989945 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.990130 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.990140 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.990153 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.990167 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.990176 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.990185 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.990192 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 00:26:52 crc kubenswrapper[4805]: E0217 00:26:52.990295 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 00:26:52 crc kubenswrapper[4805]: I0217 00:26:52.990303 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.053544 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.053591 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.053626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.053651 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.053728 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.053769 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.053873 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.053951 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 
17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155571 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155643 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155673 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155672 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155691 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155717 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155733 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155749 
4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155800 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155804 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155839 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155850 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155866 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.155891 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.361206 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.363264 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.364452 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c" exitCode=0 Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.364490 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370" exitCode=0 Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.364504 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922" exitCode=0 Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.364516 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6" exitCode=2 Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.364555 4805 scope.go:117] "RemoveContainer" containerID="99fc1ffe4a142edb9a9f226d0a1b80d7ef0abacb5832073c9eda788163cc70c9" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.367278 4805 generic.go:334] "Generic (PLEG): container finished" podID="142f7345-c74d-4880-8c0e-ca32d39e9d78" containerID="a6fe4aa60d36f968936b82169147a0a4a0d0fe6ca9a59e7a9f97948ddd274d77" exitCode=0 Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.367350 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"142f7345-c74d-4880-8c0e-ca32d39e9d78","Type":"ContainerDied","Data":"a6fe4aa60d36f968936b82169147a0a4a0d0fe6ca9a59e7a9f97948ddd274d77"} Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.368489 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.368817 4805 status_manager.go:851] "Failed to get status for pod" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.921171 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 17 00:26:53 crc kubenswrapper[4805]: I0217 00:26:53.921834 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.374919 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.607580 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.608137 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.608522 4805 status_manager.go:851] "Failed to get status for pod" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.672926 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-var-lock\") pod \"142f7345-c74d-4880-8c0e-ca32d39e9d78\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.672983 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/142f7345-c74d-4880-8c0e-ca32d39e9d78-kube-api-access\") pod \"142f7345-c74d-4880-8c0e-ca32d39e9d78\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.672993 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-var-lock" (OuterVolumeSpecName: "var-lock") pod "142f7345-c74d-4880-8c0e-ca32d39e9d78" (UID: "142f7345-c74d-4880-8c0e-ca32d39e9d78"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.673019 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-kubelet-dir\") pod \"142f7345-c74d-4880-8c0e-ca32d39e9d78\" (UID: \"142f7345-c74d-4880-8c0e-ca32d39e9d78\") " Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.673035 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "142f7345-c74d-4880-8c0e-ca32d39e9d78" (UID: "142f7345-c74d-4880-8c0e-ca32d39e9d78"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.673193 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.673204 4805 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/142f7345-c74d-4880-8c0e-ca32d39e9d78-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.681440 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/142f7345-c74d-4880-8c0e-ca32d39e9d78-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "142f7345-c74d-4880-8c0e-ca32d39e9d78" (UID: "142f7345-c74d-4880-8c0e-ca32d39e9d78"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.774481 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/142f7345-c74d-4880-8c0e-ca32d39e9d78-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.789805 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:54 crc kubenswrapper[4805]: I0217 00:26:54.790488 4805 status_manager.go:851] "Failed to get status for pod" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.111166 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.111908 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.112288 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.112783 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.113481 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc 
kubenswrapper[4805]: I0217 00:26:55.113501 4805 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.113725 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="200ms" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.314707 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="400ms" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.348937 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.350046 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.350727 4805 status_manager.go:851] "Failed to get status for pod" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.351146 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.383979 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.384668 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b" exitCode=0 Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.384742 4805 scope.go:117] "RemoveContainer" containerID="d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.384878 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.386224 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.386354 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.386575 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"142f7345-c74d-4880-8c0e-ca32d39e9d78","Type":"ContainerDied","Data":"ef548f280666787897820e62b49a33f734db2ca4fedda70339a60b3dadeba681"} Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.387412 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef548f280666787897820e62b49a33f734db2ca4fedda70339a60b3dadeba681" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.386619 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.387458 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.387742 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.387901 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.388018 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.388589 4805 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.388764 4805 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.388892 4805 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.394058 4805 status_manager.go:851] "Failed to get status for pod" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.395526 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.402871 4805 scope.go:117] "RemoveContainer" containerID="b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.416711 4805 scope.go:117] "RemoveContainer" containerID="f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.431898 4805 scope.go:117] "RemoveContainer" containerID="ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.445730 4805 scope.go:117] "RemoveContainer" containerID="c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.459740 4805 scope.go:117] "RemoveContainer" containerID="14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.481533 4805 scope.go:117] "RemoveContainer" containerID="d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.482229 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\": container with ID starting with d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c not found: ID does not exist" containerID="d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.482263 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c"} err="failed to get container status \"d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\": rpc error: code = NotFound desc = could not find container 
\"d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c\": container with ID starting with d954ef9de2b46578923dc74bbcf274ced7fddd88a234ca18b2092a9a5ec9ae9c not found: ID does not exist" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.482287 4805 scope.go:117] "RemoveContainer" containerID="b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.482720 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\": container with ID starting with b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370 not found: ID does not exist" containerID="b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.482757 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370"} err="failed to get container status \"b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\": rpc error: code = NotFound desc = could not find container \"b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370\": container with ID starting with b1c4573c15b7917102f04763e7e263aa38b4d612460c514ab5339e19c65db370 not found: ID does not exist" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.482784 4805 scope.go:117] "RemoveContainer" containerID="f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.483093 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\": container with ID starting with f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922 not found: ID does not exist" containerID="f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.483113 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922"} err="failed to get container status \"f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\": rpc error: code = NotFound desc = could not find container \"f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922\": container with ID starting with f913b28f02010678ca3e30b99659e4883f24f5720e27d9de1f6c4d12eb5b1922 not found: ID does not exist" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.483125 4805 scope.go:117] "RemoveContainer" containerID="ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.483372 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\": container with ID starting with ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6 not found: ID does not exist" containerID="ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.483395 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6"} 
err="failed to get container status \"ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\": rpc error: code = NotFound desc = could not find container \"ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6\": container with ID starting with ed41f23872410d7d569bc45c5178edb7672ef0a830b5563aa6a3b078885684c6 not found: ID does not exist" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.483408 4805 scope.go:117] "RemoveContainer" containerID="c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.483664 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\": container with ID starting with c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b not found: ID does not exist" containerID="c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.483684 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b"} err="failed to get container status \"c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\": rpc error: code = NotFound desc = could not find container \"c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b\": container with ID starting with c295aa62f0520512489ce0e06225762c7be3863ef1fe6ab2d2705cff7fe0897b not found: ID does not exist" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.483698 4805 scope.go:117] "RemoveContainer" containerID="14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.484060 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\": container with ID starting with 14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c not found: ID does not exist" containerID="14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.484079 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c"} err="failed to get container status \"14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\": rpc error: code = NotFound desc = could not find container \"14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c\": container with ID starting with 14c1a93d3105691ded27202acd0b9563764f6a2b4af4593890175a6c0eb9806c not found: ID does not exist" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.699784 4805 status_manager.go:851] "Failed to get status for pod" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc kubenswrapper[4805]: I0217 00:26:55.700311 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial 
tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.715958 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="800ms" Feb 17 00:26:55 crc kubenswrapper[4805]: E0217 00:26:55.786480 4805 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" volumeName="registry-storage" Feb 17 00:26:56 crc kubenswrapper[4805]: E0217 00:26:56.516746 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="1.6s" Feb 17 00:26:56 crc kubenswrapper[4805]: I0217 00:26:56.790095 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 17 00:26:58 crc kubenswrapper[4805]: E0217 00:26:58.030902 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:58 crc kubenswrapper[4805]: I0217 00:26:58.031559 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:58 crc kubenswrapper[4805]: E0217 00:26:58.060444 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894e111575d6694 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:26:58.059880084 +0000 UTC m=+244.075689482,LastTimestamp:2026-02-17 00:26:58.059880084 +0000 UTC m=+244.075689482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:26:58 crc kubenswrapper[4805]: E0217 00:26:58.118149 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="3.2s" Feb 17 00:26:58 crc kubenswrapper[4805]: I0217 00:26:58.406032 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef"} Feb 17 00:26:58 crc kubenswrapper[4805]: I0217 00:26:58.406248 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"806fa7adebd25b4b7a81a0a0d24533fa2a6b1049b5e8f10c2e4cc2384d491846"} Feb 17 00:26:58 crc kubenswrapper[4805]: E0217 00:26:58.409550 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:26:58 crc kubenswrapper[4805]: I0217 00:26:58.409801 4805 status_manager.go:851] "Failed to get status for pod" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:27:00 crc kubenswrapper[4805]: E0217 00:27:00.551150 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894e111575d6694 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:26:58.059880084 +0000 UTC m=+244.075689482,LastTimestamp:2026-02-17 00:26:58.059880084 +0000 UTC m=+244.075689482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:27:01 crc kubenswrapper[4805]: E0217 00:27:01.319228 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="6.4s" Feb 17 00:27:03 crc kubenswrapper[4805]: I0217 00:27:03.785511 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:27:03 crc kubenswrapper[4805]: I0217 00:27:03.786904 4805 status_manager.go:851] "Failed to get status for pod" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:27:03 crc kubenswrapper[4805]: I0217 00:27:03.798092 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fd58f9a-5d32-40b7-888b-7c418fd3074b" Feb 17 00:27:03 crc kubenswrapper[4805]: I0217 00:27:03.798130 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fd58f9a-5d32-40b7-888b-7c418fd3074b" Feb 17 00:27:03 crc kubenswrapper[4805]: E0217 00:27:03.800613 4805 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:27:03 crc kubenswrapper[4805]: I0217 00:27:03.801368 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:27:03 crc kubenswrapper[4805]: W0217 00:27:03.821950 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-7af81704aaf788c921b3667ec06b8f505e299b32396126e880747450337cbf16 WatchSource:0}: Error finding container 7af81704aaf788c921b3667ec06b8f505e299b32396126e880747450337cbf16: Status 404 returned error can't find the container with id 7af81704aaf788c921b3667ec06b8f505e299b32396126e880747450337cbf16 Feb 17 00:27:04 crc kubenswrapper[4805]: I0217 00:27:04.444366 4805 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="62fa2c16e821245da8392e66413b67c16cfb879d073ec10bd593617ce647423d" exitCode=0 Feb 17 00:27:04 crc kubenswrapper[4805]: I0217 00:27:04.444493 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"62fa2c16e821245da8392e66413b67c16cfb879d073ec10bd593617ce647423d"} Feb 17 00:27:04 crc kubenswrapper[4805]: I0217 00:27:04.444699 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7af81704aaf788c921b3667ec06b8f505e299b32396126e880747450337cbf16"} Feb 17 00:27:04 crc kubenswrapper[4805]: I0217 00:27:04.445084 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fd58f9a-5d32-40b7-888b-7c418fd3074b" Feb 17 00:27:04 crc kubenswrapper[4805]: I0217 00:27:04.445103 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fd58f9a-5d32-40b7-888b-7c418fd3074b" Feb 17 00:27:04 crc kubenswrapper[4805]: E0217 00:27:04.445860 4805 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:27:04 crc kubenswrapper[4805]: I0217 00:27:04.446207 4805 status_manager.go:851] "Failed to get status for pod" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:27:04 crc kubenswrapper[4805]: I0217 00:27:04.789190 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:27:04 crc kubenswrapper[4805]: I0217 00:27:04.789651 4805 status_manager.go:851] "Failed to get status for pod" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Feb 17 00:27:05 crc kubenswrapper[4805]: I0217 00:27:05.453223 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5fbed6ff6be25a2bc6583b0a30202c0d9e48fd187ec8f0a9131c4b5a49d5d402"} Feb 17 00:27:05 crc kubenswrapper[4805]: I0217 00:27:05.453613 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"39146f7609805d47792b3f0b5a04e356184685c88ad7630f446bf9e197855f99"} Feb 17 00:27:05 crc kubenswrapper[4805]: I0217 00:27:05.453631 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1da594b186aed631ea05147a470e1691c6ed9ce0cc872a25659839d2f7ccfe46"} Feb 17 00:27:05 crc kubenswrapper[4805]: I0217 00:27:05.453642 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bdfe306341a726475097f02f89bcf287883fd0fe1df7ced75aafe80c7bc8f7f4"} Feb 17 00:27:06 crc kubenswrapper[4805]: I0217 00:27:06.472830 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"846c741876b7c14719afce1d171b458cba44f0d0bb4b126787fa90f3f0f9188c"} Feb 17 00:27:06 crc kubenswrapper[4805]: I0217 00:27:06.473142 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:27:06 crc kubenswrapper[4805]: I0217 00:27:06.473299 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fd58f9a-5d32-40b7-888b-7c418fd3074b" Feb 17 00:27:06 crc kubenswrapper[4805]: I0217 00:27:06.473347 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fd58f9a-5d32-40b7-888b-7c418fd3074b" Feb 17 00:27:07 crc kubenswrapper[4805]: I0217 00:27:07.481180 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 00:27:07 crc kubenswrapper[4805]: I0217 00:27:07.481269 4805 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c" exitCode=1 Feb 17 00:27:07 crc kubenswrapper[4805]: I0217 00:27:07.481320 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c"} Feb 17 00:27:07 crc kubenswrapper[4805]: I0217 00:27:07.482061 4805 scope.go:117] "RemoveContainer" containerID="e5375e7f11d27969b40aab362f6a7d8a7a0516b8b803a33d21f4399de203205c" Feb 17 00:27:07 crc kubenswrapper[4805]: I0217 00:27:07.656756 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:27:08 crc kubenswrapper[4805]: I0217 00:27:08.492826 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 00:27:08 crc 
kubenswrapper[4805]: I0217 00:27:08.493178 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0819b84f12c44dfb236f9cdefb215ea44489e35cab956c11dc4336a2c331089b"} Feb 17 00:27:08 crc kubenswrapper[4805]: I0217 00:27:08.801642 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:27:08 crc kubenswrapper[4805]: I0217 00:27:08.802097 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:27:08 crc kubenswrapper[4805]: I0217 00:27:08.809742 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:27:11 crc kubenswrapper[4805]: I0217 00:27:11.014498 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:27:11 crc kubenswrapper[4805]: I0217 00:27:11.021018 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:27:11 crc kubenswrapper[4805]: I0217 00:27:11.484026 4805 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:27:11 crc kubenswrapper[4805]: I0217 00:27:11.512061 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:27:11 crc kubenswrapper[4805]: I0217 00:27:11.512141 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fd58f9a-5d32-40b7-888b-7c418fd3074b" Feb 17 00:27:11 crc kubenswrapper[4805]: I0217 00:27:11.512174 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fd58f9a-5d32-40b7-888b-7c418fd3074b" Feb 17 00:27:11 crc kubenswrapper[4805]: I0217 00:27:11.519947 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:27:11 crc kubenswrapper[4805]: I0217 00:27:11.530847 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fd58f9a-5d32-40b7-888b-7c418fd3074b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:27:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:27:04Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:27:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:27:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdfe306341a726475097f02f89bcf287883fd0fe1df7ced75aafe80c7bc8f7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39146f7609805d47792b3f0b5a04e356184685c88ad7630f446bf9e197855f99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:27:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1da594b186aed631ea05147a470e1691c6ed9ce0cc872a25659839d2f7ccfe46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://846c741876b7c14719afce1d171b458cba44f0d0bb4b126787fa90f3f0f9188c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:27:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fbed6ff6
be25a2bc6583b0a30202c0d9e48fd187ec8f0a9131c4b5a49d5d402\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:27:05Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62fa2c16e821245da8392e66413b67c16cfb879d073ec10bd593617ce647423d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62fa2c16e821245da8392e66413b67c16cfb879d073ec10bd593617ce647423d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:27:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"0fd58f9a-5d32-40b7-888b-7c418fd3074b\": field is immutable" Feb 17 00:27:11 crc kubenswrapper[4805]: I0217 00:27:11.570280 4805 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="98b6f1f3-41e6-4cdf-b2db-01cd761784ea" Feb 17 00:27:12 crc kubenswrapper[4805]: I0217 00:27:12.517963 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fd58f9a-5d32-40b7-888b-7c418fd3074b" Feb 17 00:27:12 crc kubenswrapper[4805]: I0217 00:27:12.518009 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0fd58f9a-5d32-40b7-888b-7c418fd3074b" Feb 17 00:27:12 crc kubenswrapper[4805]: I0217 00:27:12.522504 4805 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="98b6f1f3-41e6-4cdf-b2db-01cd761784ea" Feb 17 00:27:17 crc kubenswrapper[4805]: I0217 00:27:17.662407 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:27:18 crc kubenswrapper[4805]: I0217 00:27:18.064691 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 00:27:18 crc kubenswrapper[4805]: I0217 00:27:18.384006 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 00:27:18 crc kubenswrapper[4805]: I0217 00:27:18.484453 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 00:27:18 crc kubenswrapper[4805]: I0217 00:27:18.963094 4805 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 00:27:19 crc kubenswrapper[4805]: I0217 00:27:19.094206 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 00:27:19 crc kubenswrapper[4805]: I0217 00:27:19.142518 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 00:27:19 crc kubenswrapper[4805]: I0217 00:27:19.474760 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 00:27:19 crc kubenswrapper[4805]: I0217 00:27:19.740385 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 00:27:19 crc kubenswrapper[4805]: I0217 00:27:19.881246 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 00:27:20 crc kubenswrapper[4805]: I0217 00:27:20.025704 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 00:27:20 crc kubenswrapper[4805]: I0217 00:27:20.700916 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 00:27:20 crc kubenswrapper[4805]: I0217 00:27:20.728045 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 00:27:20 crc kubenswrapper[4805]: I0217 00:27:20.758304 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 00:27:21 crc kubenswrapper[4805]: I0217 00:27:21.289594 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 00:27:21 crc kubenswrapper[4805]: I0217 00:27:21.380719 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 00:27:21 crc kubenswrapper[4805]: I0217 00:27:21.780648 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 00:27:21 crc kubenswrapper[4805]: I0217 00:27:21.848053 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 00:27:22 crc kubenswrapper[4805]: I0217 00:27:22.126647 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 00:27:22 crc kubenswrapper[4805]: I0217 00:27:22.596161 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 00:27:22 crc kubenswrapper[4805]: I0217 00:27:22.740151 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 00:27:22 crc kubenswrapper[4805]: I0217 00:27:22.755217 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 00:27:22 crc kubenswrapper[4805]: I0217 00:27:22.809855 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 00:27:22 crc kubenswrapper[4805]: I0217 00:27:22.895527 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 00:27:22 crc 
kubenswrapper[4805]: I0217 00:27:22.940621 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 00:27:23 crc kubenswrapper[4805]: I0217 00:27:23.011359 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 00:27:23 crc kubenswrapper[4805]: I0217 00:27:23.249699 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 00:27:23 crc kubenswrapper[4805]: I0217 00:27:23.265632 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 00:27:23 crc kubenswrapper[4805]: I0217 00:27:23.362575 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 00:27:23 crc kubenswrapper[4805]: I0217 00:27:23.624580 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 00:27:23 crc kubenswrapper[4805]: I0217 00:27:23.654850 4805 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 00:27:23 crc kubenswrapper[4805]: I0217 00:27:23.961379 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 00:27:23 crc kubenswrapper[4805]: I0217 00:27:23.975469 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 00:27:24 crc kubenswrapper[4805]: I0217 00:27:24.081079 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 00:27:24 crc kubenswrapper[4805]: I0217 00:27:24.121532 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 00:27:24 crc kubenswrapper[4805]: I0217 00:27:24.402535 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 00:27:24 crc kubenswrapper[4805]: I0217 00:27:24.507384 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 00:27:25 crc kubenswrapper[4805]: I0217 00:27:25.596564 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 00:27:25 crc kubenswrapper[4805]: I0217 00:27:25.673371 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 00:27:25 crc kubenswrapper[4805]: I0217 00:27:25.764560 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 00:27:25 crc kubenswrapper[4805]: I0217 00:27:25.888360 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.009260 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.104586 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.140426 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.151749 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.177727 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.235547 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.370542 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.372185 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.680864 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.768064 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.809317 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.829074 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 00:27:26 crc kubenswrapper[4805]: I0217 00:27:26.966105 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.141257 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.270394 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.316962 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.415092 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.490005 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.609156 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.701187 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.733446 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.781985 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.791223 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.805283 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.849287 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 00:27:27 crc kubenswrapper[4805]: I0217 00:27:27.909148 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.012202 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.148310 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.166260 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.209575 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.278581 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.409209 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.444411 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.518348 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.581209 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.688426 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.693958 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.841279 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.943250 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 00:27:28 crc kubenswrapper[4805]: I0217 00:27:28.969974 4805 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.015377 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.059156 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.169629 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.203738 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.225137 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.243450 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.289347 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.303344 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.304403 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.311637 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.333004 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.353705 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.380234 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.400374 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.442416 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.452509 4805 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.455304 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.457018 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 00:27:29 crc 
kubenswrapper[4805]: I0217 00:27:29.457066 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.461397 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.474972 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.474955886 podStartE2EDuration="18.474955886s" podCreationTimestamp="2026-02-17 00:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:27:29.471026911 +0000 UTC m=+275.486836329" watchObservedRunningTime="2026-02-17 00:27:29.474955886 +0000 UTC m=+275.490765284" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.567694 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.592056 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.624175 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.627462 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.718379 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.792404 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 00:27:29 crc kubenswrapper[4805]: I0217 00:27:29.863278 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.059940 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.068841 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.077943 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.088463 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.147618 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.155360 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.176352 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 00:27:30 crc 
kubenswrapper[4805]: I0217 00:27:30.288966 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.339903 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.384428 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.387240 4805 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.448854 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.480486 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.621213 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.748118 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 00:27:30 crc kubenswrapper[4805]: I0217 00:27:30.980742 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.093766 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.145050 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.211957 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.248659 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.302622 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.363751 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.398178 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.438284 4805 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.449483 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.566062 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 
00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.648150 4805 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.649206 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.734723 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.768555 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.805589 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.832568 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.851093 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 00:27:31 crc kubenswrapper[4805]: I0217 00:27:31.969244 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.008945 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.037056 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.059482 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.167748 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.285836 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.288925 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.295499 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.390896 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.396098 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.472064 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.679816 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.757070 4805 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.850190 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.857001 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.903184 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 00:27:32 crc kubenswrapper[4805]: I0217 00:27:32.949595 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.055777 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.072293 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.197992 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.201229 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.259879 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.344294 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.429842 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.479279 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.484041 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.515608 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.674225 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.841805 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.922144 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jg6vt"] Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.922510 4805 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-jg6vt" podUID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" containerName="registry-server" containerID="cri-o://221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6" gracePeriod=30 Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.930405 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r825t"] Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.931051 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r825t" podUID="588d69d5-2637-42bf-a73a-d0f88ab29b83" containerName="registry-server" containerID="cri-o://0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622" gracePeriod=30 Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.962181 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lrgh"] Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.962476 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" podUID="b4b82891-39be-4580-8ec1-80e78114ca96" containerName="marketplace-operator" containerID="cri-o://b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915" gracePeriod=30 Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.974570 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcmd9"] Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.974908 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fcmd9" podUID="3f799a43-6325-4943-8c49-58ad9822eb77" containerName="registry-server" containerID="cri-o://6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646" gracePeriod=30 Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.982213 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 00:27:33 crc kubenswrapper[4805]: I0217 00:27:33.983955 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.001428 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7hfzb"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.001841 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7hfzb" podUID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerName="registry-server" containerID="cri-o://857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac" gracePeriod=30 Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.013761 4805 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.013979 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef" gracePeriod=5 Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.278900 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 
17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.307961 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.360383 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.370759 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r825t" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.371825 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.375622 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.434370 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.463484 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484692 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-catalog-content\") pod \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484746 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvlhq\" (UniqueName: \"kubernetes.io/projected/588d69d5-2637-42bf-a73a-d0f88ab29b83-kube-api-access-hvlhq\") pod \"588d69d5-2637-42bf-a73a-d0f88ab29b83\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484772 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-utilities\") pod \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484793 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-catalog-content\") pod \"3f799a43-6325-4943-8c49-58ad9822eb77\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484810 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-trusted-ca\") pod \"b4b82891-39be-4580-8ec1-80e78114ca96\" (UID: \"b4b82891-39be-4580-8ec1-80e78114ca96\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484849 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-operator-metrics\") pod \"b4b82891-39be-4580-8ec1-80e78114ca96\" (UID: 
\"b4b82891-39be-4580-8ec1-80e78114ca96\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484864 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v966\" (UniqueName: \"kubernetes.io/projected/ceb73aa9-1038-44da-adce-a56dddfbdaa0-kube-api-access-5v966\") pod \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484882 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-utilities\") pod \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484903 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-utilities\") pod \"588d69d5-2637-42bf-a73a-d0f88ab29b83\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484931 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-utilities\") pod \"3f799a43-6325-4943-8c49-58ad9822eb77\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484948 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f69sf\" (UniqueName: \"kubernetes.io/projected/b4b82891-39be-4580-8ec1-80e78114ca96-kube-api-access-f69sf\") pod \"b4b82891-39be-4580-8ec1-80e78114ca96\" (UID: \"b4b82891-39be-4580-8ec1-80e78114ca96\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484968 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq7lx\" (UniqueName: \"kubernetes.io/projected/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-kube-api-access-kq7lx\") pod \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\" (UID: \"b09f5ed1-a921-4af2-abfe-e9066d9aa05e\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.484996 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4952v\" (UniqueName: \"kubernetes.io/projected/3f799a43-6325-4943-8c49-58ad9822eb77-kube-api-access-4952v\") pod \"3f799a43-6325-4943-8c49-58ad9822eb77\" (UID: \"3f799a43-6325-4943-8c49-58ad9822eb77\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.485042 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-catalog-content\") pod \"588d69d5-2637-42bf-a73a-d0f88ab29b83\" (UID: \"588d69d5-2637-42bf-a73a-d0f88ab29b83\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.485061 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-catalog-content\") pod \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\" (UID: \"ceb73aa9-1038-44da-adce-a56dddfbdaa0\") " Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.486536 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod 
"b4b82891-39be-4580-8ec1-80e78114ca96" (UID: "b4b82891-39be-4580-8ec1-80e78114ca96"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.486524 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-utilities" (OuterVolumeSpecName: "utilities") pod "588d69d5-2637-42bf-a73a-d0f88ab29b83" (UID: "588d69d5-2637-42bf-a73a-d0f88ab29b83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.486914 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-utilities" (OuterVolumeSpecName: "utilities") pod "b09f5ed1-a921-4af2-abfe-e9066d9aa05e" (UID: "b09f5ed1-a921-4af2-abfe-e9066d9aa05e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.488476 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-utilities" (OuterVolumeSpecName: "utilities") pod "3f799a43-6325-4943-8c49-58ad9822eb77" (UID: "3f799a43-6325-4943-8c49-58ad9822eb77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.491515 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f799a43-6325-4943-8c49-58ad9822eb77-kube-api-access-4952v" (OuterVolumeSpecName: "kube-api-access-4952v") pod "3f799a43-6325-4943-8c49-58ad9822eb77" (UID: "3f799a43-6325-4943-8c49-58ad9822eb77"). InnerVolumeSpecName "kube-api-access-4952v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.491605 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b4b82891-39be-4580-8ec1-80e78114ca96" (UID: "b4b82891-39be-4580-8ec1-80e78114ca96"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.491656 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb73aa9-1038-44da-adce-a56dddfbdaa0-kube-api-access-5v966" (OuterVolumeSpecName: "kube-api-access-5v966") pod "ceb73aa9-1038-44da-adce-a56dddfbdaa0" (UID: "ceb73aa9-1038-44da-adce-a56dddfbdaa0"). InnerVolumeSpecName "kube-api-access-5v966". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.491924 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-kube-api-access-kq7lx" (OuterVolumeSpecName: "kube-api-access-kq7lx") pod "b09f5ed1-a921-4af2-abfe-e9066d9aa05e" (UID: "b09f5ed1-a921-4af2-abfe-e9066d9aa05e"). InnerVolumeSpecName "kube-api-access-kq7lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.492776 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588d69d5-2637-42bf-a73a-d0f88ab29b83-kube-api-access-hvlhq" (OuterVolumeSpecName: "kube-api-access-hvlhq") pod "588d69d5-2637-42bf-a73a-d0f88ab29b83" (UID: "588d69d5-2637-42bf-a73a-d0f88ab29b83"). InnerVolumeSpecName "kube-api-access-hvlhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.494177 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b82891-39be-4580-8ec1-80e78114ca96-kube-api-access-f69sf" (OuterVolumeSpecName: "kube-api-access-f69sf") pod "b4b82891-39be-4580-8ec1-80e78114ca96" (UID: "b4b82891-39be-4580-8ec1-80e78114ca96"). InnerVolumeSpecName "kube-api-access-f69sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.499546 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-utilities" (OuterVolumeSpecName: "utilities") pod "ceb73aa9-1038-44da-adce-a56dddfbdaa0" (UID: "ceb73aa9-1038-44da-adce-a56dddfbdaa0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.536107 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f799a43-6325-4943-8c49-58ad9822eb77" (UID: "3f799a43-6325-4943-8c49-58ad9822eb77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.551136 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b09f5ed1-a921-4af2-abfe-e9066d9aa05e" (UID: "b09f5ed1-a921-4af2-abfe-e9066d9aa05e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.567052 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "588d69d5-2637-42bf-a73a-d0f88ab29b83" (UID: "588d69d5-2637-42bf-a73a-d0f88ab29b83"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586227 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586257 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586268 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v966\" (UniqueName: \"kubernetes.io/projected/ceb73aa9-1038-44da-adce-a56dddfbdaa0-kube-api-access-5v966\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586278 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586286 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586296 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f69sf\" (UniqueName: \"kubernetes.io/projected/b4b82891-39be-4580-8ec1-80e78114ca96-kube-api-access-f69sf\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586305 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq7lx\" (UniqueName: \"kubernetes.io/projected/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-kube-api-access-kq7lx\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586314 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4952v\" (UniqueName: \"kubernetes.io/projected/3f799a43-6325-4943-8c49-58ad9822eb77-kube-api-access-4952v\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586337 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/588d69d5-2637-42bf-a73a-d0f88ab29b83-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586346 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09f5ed1-a921-4af2-abfe-e9066d9aa05e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586355 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvlhq\" (UniqueName: \"kubernetes.io/projected/588d69d5-2637-42bf-a73a-d0f88ab29b83-kube-api-access-hvlhq\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586363 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586373 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b4b82891-39be-4580-8ec1-80e78114ca96-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.586381 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f799a43-6325-4943-8c49-58ad9822eb77-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.595265 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.601450 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.616678 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ceb73aa9-1038-44da-adce-a56dddfbdaa0" (UID: "ceb73aa9-1038-44da-adce-a56dddfbdaa0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.656481 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.675766 4805 generic.go:334] "Generic (PLEG): container finished" podID="588d69d5-2637-42bf-a73a-d0f88ab29b83" containerID="0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622" exitCode=0 Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.675841 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r825t" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.675873 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r825t" event={"ID":"588d69d5-2637-42bf-a73a-d0f88ab29b83","Type":"ContainerDied","Data":"0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622"} Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.675933 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r825t" event={"ID":"588d69d5-2637-42bf-a73a-d0f88ab29b83","Type":"ContainerDied","Data":"35cc5ec1dcc48f79e0dff05053d93f9a8d66a1cef7000d9ab472f7a9405b226b"} Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.675957 4805 scope.go:117] "RemoveContainer" containerID="0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.678510 4805 generic.go:334] "Generic (PLEG): container finished" podID="3f799a43-6325-4943-8c49-58ad9822eb77" containerID="6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646" exitCode=0 Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.678577 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcmd9" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.678586 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcmd9" event={"ID":"3f799a43-6325-4943-8c49-58ad9822eb77","Type":"ContainerDied","Data":"6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646"} Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.678628 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcmd9" event={"ID":"3f799a43-6325-4943-8c49-58ad9822eb77","Type":"ContainerDied","Data":"5dc113bbe851b603aed9ca66e739cd07837f01ec56ec25a17e921addba56e243"} Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.683356 4805 generic.go:334] "Generic (PLEG): container finished" podID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" containerID="221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6" exitCode=0 Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.683451 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jg6vt" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.683491 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jg6vt" event={"ID":"b09f5ed1-a921-4af2-abfe-e9066d9aa05e","Type":"ContainerDied","Data":"221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6"} Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.683524 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jg6vt" event={"ID":"b09f5ed1-a921-4af2-abfe-e9066d9aa05e","Type":"ContainerDied","Data":"0a186dbaf0e3c415867b1eb078026847cb8d1dd66c75920663ccdbb945c7759f"} Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.687241 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb73aa9-1038-44da-adce-a56dddfbdaa0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.691197 4805 generic.go:334] "Generic (PLEG): container finished" podID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerID="857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac" exitCode=0 Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.691277 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hfzb" event={"ID":"ceb73aa9-1038-44da-adce-a56dddfbdaa0","Type":"ContainerDied","Data":"857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac"} Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.691307 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hfzb" event={"ID":"ceb73aa9-1038-44da-adce-a56dddfbdaa0","Type":"ContainerDied","Data":"6cc45798fe0e7e7e0c769e7710cf24476cc70035e933cfa03231f733d3436917"} Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.691405 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7hfzb" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.698842 4805 generic.go:334] "Generic (PLEG): container finished" podID="b4b82891-39be-4580-8ec1-80e78114ca96" containerID="b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915" exitCode=0 Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.698912 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" event={"ID":"b4b82891-39be-4580-8ec1-80e78114ca96","Type":"ContainerDied","Data":"b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915"} Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.698941 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" event={"ID":"b4b82891-39be-4580-8ec1-80e78114ca96","Type":"ContainerDied","Data":"fc434222df0178ef43b3d5a6aad5d0fb5fe6e24dace7fee64b01ec886aaf18a5"} Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.699441 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9lrgh" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.700394 4805 scope.go:117] "RemoveContainer" containerID="2df4866e6b1f589f9b5aaa4f49fba6f67ab5043e8a492344eb0029a8f7ae1366" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.719175 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.722221 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.724802 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r825t"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.725730 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.727957 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r825t"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.732831 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcmd9"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.736303 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcmd9"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.741240 4805 scope.go:117] "RemoveContainer" containerID="bad7e7a7eff806809785be7e6b9634d7e6be03ce6b4836ebc0f9bea339cb6b94" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.741862 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jg6vt"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.744926 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jg6vt"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.756017 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lrgh"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.760886 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 00:27:34 crc 
kubenswrapper[4805]: I0217 00:27:34.762176 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9lrgh"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.762611 4805 scope.go:117] "RemoveContainer" containerID="0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.763163 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622\": container with ID starting with 0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622 not found: ID does not exist" containerID="0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.763258 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622"} err="failed to get container status \"0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622\": rpc error: code = NotFound desc = could not find container \"0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622\": container with ID starting with 0e29611936185c961e00eb585127e82a13340193f6aebd39ed85856771dd7622 not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.763362 4805 scope.go:117] "RemoveContainer" containerID="2df4866e6b1f589f9b5aaa4f49fba6f67ab5043e8a492344eb0029a8f7ae1366" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.763815 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2df4866e6b1f589f9b5aaa4f49fba6f67ab5043e8a492344eb0029a8f7ae1366\": container with ID starting with 2df4866e6b1f589f9b5aaa4f49fba6f67ab5043e8a492344eb0029a8f7ae1366 not found: ID does not exist" containerID="2df4866e6b1f589f9b5aaa4f49fba6f67ab5043e8a492344eb0029a8f7ae1366" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.763854 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2df4866e6b1f589f9b5aaa4f49fba6f67ab5043e8a492344eb0029a8f7ae1366"} err="failed to get container status \"2df4866e6b1f589f9b5aaa4f49fba6f67ab5043e8a492344eb0029a8f7ae1366\": rpc error: code = NotFound desc = could not find container \"2df4866e6b1f589f9b5aaa4f49fba6f67ab5043e8a492344eb0029a8f7ae1366\": container with ID starting with 2df4866e6b1f589f9b5aaa4f49fba6f67ab5043e8a492344eb0029a8f7ae1366 not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.763879 4805 scope.go:117] "RemoveContainer" containerID="bad7e7a7eff806809785be7e6b9634d7e6be03ce6b4836ebc0f9bea339cb6b94" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.764211 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bad7e7a7eff806809785be7e6b9634d7e6be03ce6b4836ebc0f9bea339cb6b94\": container with ID starting with bad7e7a7eff806809785be7e6b9634d7e6be03ce6b4836ebc0f9bea339cb6b94 not found: ID does not exist" containerID="bad7e7a7eff806809785be7e6b9634d7e6be03ce6b4836ebc0f9bea339cb6b94" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.764362 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad7e7a7eff806809785be7e6b9634d7e6be03ce6b4836ebc0f9bea339cb6b94"} err="failed to get container status 
\"bad7e7a7eff806809785be7e6b9634d7e6be03ce6b4836ebc0f9bea339cb6b94\": rpc error: code = NotFound desc = could not find container \"bad7e7a7eff806809785be7e6b9634d7e6be03ce6b4836ebc0f9bea339cb6b94\": container with ID starting with bad7e7a7eff806809785be7e6b9634d7e6be03ce6b4836ebc0f9bea339cb6b94 not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.764469 4805 scope.go:117] "RemoveContainer" containerID="6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.765093 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7hfzb"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.767709 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7hfzb"] Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.776475 4805 scope.go:117] "RemoveContainer" containerID="0c0a77cc239b483594c0f9205938dd72b0e6619bda4422206e618e5ad064b55c" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.788089 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.793806 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f799a43-6325-4943-8c49-58ad9822eb77" path="/var/lib/kubelet/pods/3f799a43-6325-4943-8c49-58ad9822eb77/volumes" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.794204 4805 scope.go:117] "RemoveContainer" containerID="3ffed4a9c4d0136ebc40f521a5a0e74d22089ae11fbefa9999980a96c07fd6fb" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.794873 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588d69d5-2637-42bf-a73a-d0f88ab29b83" path="/var/lib/kubelet/pods/588d69d5-2637-42bf-a73a-d0f88ab29b83/volumes" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.795642 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" path="/var/lib/kubelet/pods/b09f5ed1-a921-4af2-abfe-e9066d9aa05e/volumes" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.796793 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b82891-39be-4580-8ec1-80e78114ca96" path="/var/lib/kubelet/pods/b4b82891-39be-4580-8ec1-80e78114ca96/volumes" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.797293 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" path="/var/lib/kubelet/pods/ceb73aa9-1038-44da-adce-a56dddfbdaa0/volumes" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.807662 4805 scope.go:117] "RemoveContainer" containerID="6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.808063 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646\": container with ID starting with 6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646 not found: ID does not exist" containerID="6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.808104 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646"} err="failed to get container status 
\"6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646\": rpc error: code = NotFound desc = could not find container \"6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646\": container with ID starting with 6621acd300b9570a053868efb548f6ae6ef3bba701cd68c606b4b4e988eb7646 not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.808133 4805 scope.go:117] "RemoveContainer" containerID="0c0a77cc239b483594c0f9205938dd72b0e6619bda4422206e618e5ad064b55c" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.808443 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c0a77cc239b483594c0f9205938dd72b0e6619bda4422206e618e5ad064b55c\": container with ID starting with 0c0a77cc239b483594c0f9205938dd72b0e6619bda4422206e618e5ad064b55c not found: ID does not exist" containerID="0c0a77cc239b483594c0f9205938dd72b0e6619bda4422206e618e5ad064b55c" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.808467 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0a77cc239b483594c0f9205938dd72b0e6619bda4422206e618e5ad064b55c"} err="failed to get container status \"0c0a77cc239b483594c0f9205938dd72b0e6619bda4422206e618e5ad064b55c\": rpc error: code = NotFound desc = could not find container \"0c0a77cc239b483594c0f9205938dd72b0e6619bda4422206e618e5ad064b55c\": container with ID starting with 0c0a77cc239b483594c0f9205938dd72b0e6619bda4422206e618e5ad064b55c not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.808484 4805 scope.go:117] "RemoveContainer" containerID="3ffed4a9c4d0136ebc40f521a5a0e74d22089ae11fbefa9999980a96c07fd6fb" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.808720 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ffed4a9c4d0136ebc40f521a5a0e74d22089ae11fbefa9999980a96c07fd6fb\": container with ID starting with 3ffed4a9c4d0136ebc40f521a5a0e74d22089ae11fbefa9999980a96c07fd6fb not found: ID does not exist" containerID="3ffed4a9c4d0136ebc40f521a5a0e74d22089ae11fbefa9999980a96c07fd6fb" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.808804 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ffed4a9c4d0136ebc40f521a5a0e74d22089ae11fbefa9999980a96c07fd6fb"} err="failed to get container status \"3ffed4a9c4d0136ebc40f521a5a0e74d22089ae11fbefa9999980a96c07fd6fb\": rpc error: code = NotFound desc = could not find container \"3ffed4a9c4d0136ebc40f521a5a0e74d22089ae11fbefa9999980a96c07fd6fb\": container with ID starting with 3ffed4a9c4d0136ebc40f521a5a0e74d22089ae11fbefa9999980a96c07fd6fb not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.810139 4805 scope.go:117] "RemoveContainer" containerID="221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.823415 4805 scope.go:117] "RemoveContainer" containerID="fef6024826c7a851490a951fb373ab51a5d29a416d9bfaebaba555ecca340b23" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.842607 4805 scope.go:117] "RemoveContainer" containerID="72c3750c185f070e25272b1f866d596ef65293cf923a2c00437c824c640dca55" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.869687 4805 scope.go:117] "RemoveContainer" containerID="221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6" Feb 17 00:27:34 crc 
kubenswrapper[4805]: E0217 00:27:34.870430 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6\": container with ID starting with 221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6 not found: ID does not exist" containerID="221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.870482 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6"} err="failed to get container status \"221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6\": rpc error: code = NotFound desc = could not find container \"221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6\": container with ID starting with 221def014b3f7b7d8ca8a749bc3bd412fd710f4c24ab402cf1f35eecdd02afc6 not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.870518 4805 scope.go:117] "RemoveContainer" containerID="fef6024826c7a851490a951fb373ab51a5d29a416d9bfaebaba555ecca340b23" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.870912 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fef6024826c7a851490a951fb373ab51a5d29a416d9bfaebaba555ecca340b23\": container with ID starting with fef6024826c7a851490a951fb373ab51a5d29a416d9bfaebaba555ecca340b23 not found: ID does not exist" containerID="fef6024826c7a851490a951fb373ab51a5d29a416d9bfaebaba555ecca340b23" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.871084 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef6024826c7a851490a951fb373ab51a5d29a416d9bfaebaba555ecca340b23"} err="failed to get container status \"fef6024826c7a851490a951fb373ab51a5d29a416d9bfaebaba555ecca340b23\": rpc error: code = NotFound desc = could not find container \"fef6024826c7a851490a951fb373ab51a5d29a416d9bfaebaba555ecca340b23\": container with ID starting with fef6024826c7a851490a951fb373ab51a5d29a416d9bfaebaba555ecca340b23 not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.871216 4805 scope.go:117] "RemoveContainer" containerID="72c3750c185f070e25272b1f866d596ef65293cf923a2c00437c824c640dca55" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.871623 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72c3750c185f070e25272b1f866d596ef65293cf923a2c00437c824c640dca55\": container with ID starting with 72c3750c185f070e25272b1f866d596ef65293cf923a2c00437c824c640dca55 not found: ID does not exist" containerID="72c3750c185f070e25272b1f866d596ef65293cf923a2c00437c824c640dca55" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.871771 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72c3750c185f070e25272b1f866d596ef65293cf923a2c00437c824c640dca55"} err="failed to get container status \"72c3750c185f070e25272b1f866d596ef65293cf923a2c00437c824c640dca55\": rpc error: code = NotFound desc = could not find container \"72c3750c185f070e25272b1f866d596ef65293cf923a2c00437c824c640dca55\": container with ID starting with 72c3750c185f070e25272b1f866d596ef65293cf923a2c00437c824c640dca55 not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: 
I0217 00:27:34.871901 4805 scope.go:117] "RemoveContainer" containerID="857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.885367 4805 scope.go:117] "RemoveContainer" containerID="01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.908403 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.910798 4805 scope.go:117] "RemoveContainer" containerID="e39a51a759245d403742723f6e3e701275516948c62ff5b0a4b71350ea8e918e" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.931906 4805 scope.go:117] "RemoveContainer" containerID="857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.933016 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac\": container with ID starting with 857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac not found: ID does not exist" containerID="857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.933055 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac"} err="failed to get container status \"857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac\": rpc error: code = NotFound desc = could not find container \"857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac\": container with ID starting with 857a75837fca226195e1f2a2bc72846d30294b522bef5be4910e8a67e8171fac not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.933082 4805 scope.go:117] "RemoveContainer" containerID="01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.933616 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224\": container with ID starting with 01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224 not found: ID does not exist" containerID="01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.933778 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224"} err="failed to get container status \"01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224\": rpc error: code = NotFound desc = could not find container \"01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224\": container with ID starting with 01e8776883de005e695ee4daa702d9087c9b4ab3214a08e89d38a89e990a2224 not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.933900 4805 scope.go:117] "RemoveContainer" containerID="e39a51a759245d403742723f6e3e701275516948c62ff5b0a4b71350ea8e918e" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.934290 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e39a51a759245d403742723f6e3e701275516948c62ff5b0a4b71350ea8e918e\": container with ID starting with e39a51a759245d403742723f6e3e701275516948c62ff5b0a4b71350ea8e918e not found: ID does not exist" containerID="e39a51a759245d403742723f6e3e701275516948c62ff5b0a4b71350ea8e918e" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.934342 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e39a51a759245d403742723f6e3e701275516948c62ff5b0a4b71350ea8e918e"} err="failed to get container status \"e39a51a759245d403742723f6e3e701275516948c62ff5b0a4b71350ea8e918e\": rpc error: code = NotFound desc = could not find container \"e39a51a759245d403742723f6e3e701275516948c62ff5b0a4b71350ea8e918e\": container with ID starting with e39a51a759245d403742723f6e3e701275516948c62ff5b0a4b71350ea8e918e not found: ID does not exist" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.934362 4805 scope.go:117] "RemoveContainer" containerID="b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.950644 4805 scope.go:117] "RemoveContainer" containerID="b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915" Feb 17 00:27:34 crc kubenswrapper[4805]: E0217 00:27:34.951303 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915\": container with ID starting with b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915 not found: ID does not exist" containerID="b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915" Feb 17 00:27:34 crc kubenswrapper[4805]: I0217 00:27:34.951356 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915"} err="failed to get container status \"b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915\": rpc error: code = NotFound desc = could not find container \"b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915\": container with ID starting with b95e00fe69757e3b8f2bd1ce088ad3c718bb3cfb0c7ed2a40255296de5368915 not found: ID does not exist" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.059103 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.147422 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.161621 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.177536 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.193807 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.194403 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.209394 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" 
Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.225076 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.292294 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.497512 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.685558 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.687590 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.718483 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.719226 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.725100 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.812115 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.833212 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.886264 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.905598 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.909845 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 00:27:35 crc kubenswrapper[4805]: I0217 00:27:35.917207 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.022656 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.076788 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.095054 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.107445 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 00:27:36 crc 
kubenswrapper[4805]: I0217 00:27:36.247684 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.279870 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.445490 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.572366 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.752744 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.873844 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.875806 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.880830 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.886392 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nv6ks"] Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.886804 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.886846 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.886891 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" containerName="extract-utilities" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.886913 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" containerName="extract-utilities" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.886941 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f799a43-6325-4943-8c49-58ad9822eb77" containerName="extract-utilities" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.886961 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f799a43-6325-4943-8c49-58ad9822eb77" containerName="extract-utilities" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.886981 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.886996 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.887021 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588d69d5-2637-42bf-a73a-d0f88ab29b83" containerName="extract-utilities" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887038 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="588d69d5-2637-42bf-a73a-d0f88ab29b83" containerName="extract-utilities" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.887058 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588d69d5-2637-42bf-a73a-d0f88ab29b83" containerName="extract-content" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887073 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="588d69d5-2637-42bf-a73a-d0f88ab29b83" containerName="extract-content" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.887091 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b82891-39be-4580-8ec1-80e78114ca96" containerName="marketplace-operator" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887108 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b82891-39be-4580-8ec1-80e78114ca96" containerName="marketplace-operator" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.887138 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerName="extract-utilities" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887153 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerName="extract-utilities" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.887176 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588d69d5-2637-42bf-a73a-d0f88ab29b83" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887192 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="588d69d5-2637-42bf-a73a-d0f88ab29b83" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.887214 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f799a43-6325-4943-8c49-58ad9822eb77" containerName="extract-content" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887229 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f799a43-6325-4943-8c49-58ad9822eb77" containerName="extract-content" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.887250 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887265 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.887288 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" containerName="installer" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887303 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" containerName="installer" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.887358 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f799a43-6325-4943-8c49-58ad9822eb77" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887377 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f799a43-6325-4943-8c49-58ad9822eb77" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.887426 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" containerName="extract-content" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887448 4805 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" containerName="extract-content" Feb 17 00:27:36 crc kubenswrapper[4805]: E0217 00:27:36.887474 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerName="extract-content" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887491 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerName="extract-content" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887697 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b09f5ed1-a921-4af2-abfe-e9066d9aa05e" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887723 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f799a43-6325-4943-8c49-58ad9822eb77" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887752 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887773 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="142f7345-c74d-4880-8c0e-ca32d39e9d78" containerName="installer" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887794 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="588d69d5-2637-42bf-a73a-d0f88ab29b83" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887812 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b82891-39be-4580-8ec1-80e78114ca96" containerName="marketplace-operator" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.887843 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb73aa9-1038-44da-adce-a56dddfbdaa0" containerName="registry-server" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.888621 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.897203 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.897639 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.897975 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.898297 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.906591 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 00:27:36 crc kubenswrapper[4805]: I0217 00:27:36.913021 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nv6ks"] Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.008643 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.009283 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.018073 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksxlf\" (UniqueName: \"kubernetes.io/projected/e80d2a1c-4272-4797-bf0c-03b011ed297f-kube-api-access-ksxlf\") pod \"marketplace-operator-79b997595-nv6ks\" (UID: \"e80d2a1c-4272-4797-bf0c-03b011ed297f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.018200 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e80d2a1c-4272-4797-bf0c-03b011ed297f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nv6ks\" (UID: \"e80d2a1c-4272-4797-bf0c-03b011ed297f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.018251 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e80d2a1c-4272-4797-bf0c-03b011ed297f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nv6ks\" (UID: \"e80d2a1c-4272-4797-bf0c-03b011ed297f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.119833 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksxlf\" (UniqueName: \"kubernetes.io/projected/e80d2a1c-4272-4797-bf0c-03b011ed297f-kube-api-access-ksxlf\") pod \"marketplace-operator-79b997595-nv6ks\" (UID: \"e80d2a1c-4272-4797-bf0c-03b011ed297f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.119968 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/e80d2a1c-4272-4797-bf0c-03b011ed297f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nv6ks\" (UID: \"e80d2a1c-4272-4797-bf0c-03b011ed297f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.120018 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e80d2a1c-4272-4797-bf0c-03b011ed297f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nv6ks\" (UID: \"e80d2a1c-4272-4797-bf0c-03b011ed297f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.122522 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e80d2a1c-4272-4797-bf0c-03b011ed297f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nv6ks\" (UID: \"e80d2a1c-4272-4797-bf0c-03b011ed297f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.126284 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e80d2a1c-4272-4797-bf0c-03b011ed297f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nv6ks\" (UID: \"e80d2a1c-4272-4797-bf0c-03b011ed297f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.138473 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksxlf\" (UniqueName: \"kubernetes.io/projected/e80d2a1c-4272-4797-bf0c-03b011ed297f-kube-api-access-ksxlf\") pod \"marketplace-operator-79b997595-nv6ks\" (UID: \"e80d2a1c-4272-4797-bf0c-03b011ed297f\") " pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.154927 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.199678 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.226672 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.374188 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.439216 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nv6ks"] Feb 17 00:27:37 crc kubenswrapper[4805]: W0217 00:27:37.448503 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode80d2a1c_4272_4797_bf0c_03b011ed297f.slice/crio-3611e7e2180ffbd261c0e13c848241b0da99e1fd54bcc8681b37fbdea97b4723 WatchSource:0}: Error finding container 3611e7e2180ffbd261c0e13c848241b0da99e1fd54bcc8681b37fbdea97b4723: Status 404 returned error can't find the container with id 3611e7e2180ffbd261c0e13c848241b0da99e1fd54bcc8681b37fbdea97b4723 Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.539429 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.551753 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.576203 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.709265 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.722654 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" event={"ID":"e80d2a1c-4272-4797-bf0c-03b011ed297f","Type":"ContainerStarted","Data":"4990a9dc732b3bf56c8d5caf1bed413039a5d256ce1b4750490dcb44eb26a8be"} Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.722704 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" event={"ID":"e80d2a1c-4272-4797-bf0c-03b011ed297f","Type":"ContainerStarted","Data":"3611e7e2180ffbd261c0e13c848241b0da99e1fd54bcc8681b37fbdea97b4723"} Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.722878 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.724172 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nv6ks container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" start-of-body= Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.724219 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" podUID="e80d2a1c-4272-4797-bf0c-03b011ed297f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" Feb 17 00:27:37 crc kubenswrapper[4805]: I0217 00:27:37.916083 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 00:27:38 crc 
kubenswrapper[4805]: I0217 00:27:38.132228 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.166556 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.177415 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.423266 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.463943 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.505724 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.548154 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.611665 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.712550 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.733805 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.760745 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nv6ks" podStartSLOduration=5.760720619 podStartE2EDuration="5.760720619s" podCreationTimestamp="2026-02-17 00:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:27:37.744638893 +0000 UTC m=+283.760448291" watchObservedRunningTime="2026-02-17 00:27:38.760720619 +0000 UTC m=+284.776530017" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.792802 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.899277 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 00:27:38 crc kubenswrapper[4805]: I0217 00:27:38.943561 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.086129 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.362416 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.458206 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 
00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.495619 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.551693 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.585700 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.585811 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.593672 4805 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.736800 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.736859 4805 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef" exitCode=137 Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.736965 4805 scope.go:117] "RemoveContainer" containerID="72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.737080 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.749670 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.749886 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.750028 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.750148 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.750264 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.750471 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.750532 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.750574 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.750694 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.750923 4805 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.751016 4805 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.751095 4805 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.751182 4805 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.751662 4805 scope.go:117] "RemoveContainer" containerID="72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef" Feb 17 00:27:39 crc kubenswrapper[4805]: E0217 00:27:39.752064 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef\": container with ID starting with 72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef not found: ID does not exist" containerID="72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.752178 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef"} err="failed to get container status \"72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef\": rpc error: code = NotFound desc = could not find container \"72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef\": container with ID starting with 72d84bf8848a67308337c0154dd36bfe4ec8d4a9db63d558e05b2de8350d85ef not found: ID does not exist" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.760591 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.780258 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.810101 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 00:27:39 crc kubenswrapper[4805]: I0217 00:27:39.852954 4805 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:40 crc kubenswrapper[4805]: I0217 00:27:40.335703 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 00:27:40 crc kubenswrapper[4805]: I0217 00:27:40.547381 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 00:27:40 crc kubenswrapper[4805]: I0217 00:27:40.792116 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 00:27:40 crc kubenswrapper[4805]: I0217 00:27:40.824884 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 00:27:41 crc kubenswrapper[4805]: I0217 00:27:41.205740 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 00:27:54 crc kubenswrapper[4805]: I0217 00:27:54.545347 4805 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.456658 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lst4d"] Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.457151 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" podUID="34ca278b-8fb7-4658-a073-e8aefda92bed" containerName="controller-manager" containerID="cri-o://af208842d60974bce121cbc7b17e4972ad7bdd0850414acab651f14854c685bf" gracePeriod=30 Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.545737 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn"] Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.545936 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" podUID="68bd2261-de7d-47ae-a688-59fa77073077" containerName="route-controller-manager" containerID="cri-o://861596bbab028c22deb93c7ba6a4acd2a7f5960698794a942c8cf431e2ddb6f7" gracePeriod=30 Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.839995 4805 generic.go:334] "Generic (PLEG): container finished" podID="68bd2261-de7d-47ae-a688-59fa77073077" containerID="861596bbab028c22deb93c7ba6a4acd2a7f5960698794a942c8cf431e2ddb6f7" exitCode=0 Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.840097 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" 
event={"ID":"68bd2261-de7d-47ae-a688-59fa77073077","Type":"ContainerDied","Data":"861596bbab028c22deb93c7ba6a4acd2a7f5960698794a942c8cf431e2ddb6f7"} Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.841943 4805 generic.go:334] "Generic (PLEG): container finished" podID="34ca278b-8fb7-4658-a073-e8aefda92bed" containerID="af208842d60974bce121cbc7b17e4972ad7bdd0850414acab651f14854c685bf" exitCode=0 Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.841976 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" event={"ID":"34ca278b-8fb7-4658-a073-e8aefda92bed","Type":"ContainerDied","Data":"af208842d60974bce121cbc7b17e4972ad7bdd0850414acab651f14854c685bf"} Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.916587 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.922299 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.985638 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-config\") pod \"68bd2261-de7d-47ae-a688-59fa77073077\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.985695 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfkl2\" (UniqueName: \"kubernetes.io/projected/34ca278b-8fb7-4658-a073-e8aefda92bed-kube-api-access-pfkl2\") pod \"34ca278b-8fb7-4658-a073-e8aefda92bed\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.985739 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-config\") pod \"34ca278b-8fb7-4658-a073-e8aefda92bed\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.985763 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w5dq\" (UniqueName: \"kubernetes.io/projected/68bd2261-de7d-47ae-a688-59fa77073077-kube-api-access-7w5dq\") pod \"68bd2261-de7d-47ae-a688-59fa77073077\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.985799 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34ca278b-8fb7-4658-a073-e8aefda92bed-serving-cert\") pod \"34ca278b-8fb7-4658-a073-e8aefda92bed\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.985833 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-client-ca\") pod \"34ca278b-8fb7-4658-a073-e8aefda92bed\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.985847 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-proxy-ca-bundles\") pod 
\"34ca278b-8fb7-4658-a073-e8aefda92bed\" (UID: \"34ca278b-8fb7-4658-a073-e8aefda92bed\") " Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.985873 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-client-ca\") pod \"68bd2261-de7d-47ae-a688-59fa77073077\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.985898 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68bd2261-de7d-47ae-a688-59fa77073077-serving-cert\") pod \"68bd2261-de7d-47ae-a688-59fa77073077\" (UID: \"68bd2261-de7d-47ae-a688-59fa77073077\") " Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.987826 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "34ca278b-8fb7-4658-a073-e8aefda92bed" (UID: "34ca278b-8fb7-4658-a073-e8aefda92bed"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.987865 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-config" (OuterVolumeSpecName: "config") pod "34ca278b-8fb7-4658-a073-e8aefda92bed" (UID: "34ca278b-8fb7-4658-a073-e8aefda92bed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.987876 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-client-ca" (OuterVolumeSpecName: "client-ca") pod "34ca278b-8fb7-4658-a073-e8aefda92bed" (UID: "34ca278b-8fb7-4658-a073-e8aefda92bed"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.987972 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-client-ca" (OuterVolumeSpecName: "client-ca") pod "68bd2261-de7d-47ae-a688-59fa77073077" (UID: "68bd2261-de7d-47ae-a688-59fa77073077"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.987994 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-config" (OuterVolumeSpecName: "config") pod "68bd2261-de7d-47ae-a688-59fa77073077" (UID: "68bd2261-de7d-47ae-a688-59fa77073077"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.991683 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ca278b-8fb7-4658-a073-e8aefda92bed-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "34ca278b-8fb7-4658-a073-e8aefda92bed" (UID: "34ca278b-8fb7-4658-a073-e8aefda92bed"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.991698 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ca278b-8fb7-4658-a073-e8aefda92bed-kube-api-access-pfkl2" (OuterVolumeSpecName: "kube-api-access-pfkl2") pod "34ca278b-8fb7-4658-a073-e8aefda92bed" (UID: "34ca278b-8fb7-4658-a073-e8aefda92bed"). InnerVolumeSpecName "kube-api-access-pfkl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.991734 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68bd2261-de7d-47ae-a688-59fa77073077-kube-api-access-7w5dq" (OuterVolumeSpecName: "kube-api-access-7w5dq") pod "68bd2261-de7d-47ae-a688-59fa77073077" (UID: "68bd2261-de7d-47ae-a688-59fa77073077"). InnerVolumeSpecName "kube-api-access-7w5dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:27:57 crc kubenswrapper[4805]: I0217 00:27:57.992275 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68bd2261-de7d-47ae-a688-59fa77073077-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "68bd2261-de7d-47ae-a688-59fa77073077" (UID: "68bd2261-de7d-47ae-a688-59fa77073077"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.087581 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfkl2\" (UniqueName: \"kubernetes.io/projected/34ca278b-8fb7-4658-a073-e8aefda92bed-kube-api-access-pfkl2\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.087615 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.087626 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w5dq\" (UniqueName: \"kubernetes.io/projected/68bd2261-de7d-47ae-a688-59fa77073077-kube-api-access-7w5dq\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.087636 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34ca278b-8fb7-4658-a073-e8aefda92bed-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.087645 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.087652 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34ca278b-8fb7-4658-a073-e8aefda92bed-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.087660 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.087668 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68bd2261-de7d-47ae-a688-59fa77073077-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:58 crc 
kubenswrapper[4805]: I0217 00:27:58.087678 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68bd2261-de7d-47ae-a688-59fa77073077-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.666115 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-847cbf85fb-zkkrl"] Feb 17 00:27:58 crc kubenswrapper[4805]: E0217 00:27:58.666741 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ca278b-8fb7-4658-a073-e8aefda92bed" containerName="controller-manager" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.666757 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ca278b-8fb7-4658-a073-e8aefda92bed" containerName="controller-manager" Feb 17 00:27:58 crc kubenswrapper[4805]: E0217 00:27:58.666771 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68bd2261-de7d-47ae-a688-59fa77073077" containerName="route-controller-manager" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.666780 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="68bd2261-de7d-47ae-a688-59fa77073077" containerName="route-controller-manager" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.666908 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ca278b-8fb7-4658-a073-e8aefda92bed" containerName="controller-manager" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.666921 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="68bd2261-de7d-47ae-a688-59fa77073077" containerName="route-controller-manager" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.667849 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.677320 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64"] Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.678297 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.692678 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-847cbf85fb-zkkrl"] Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.695159 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64"] Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.795131 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-client-ca\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.795189 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b008e37-2f91-4af6-8719-44eb68027086-serving-cert\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.795233 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-serving-cert\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.795270 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-config\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.796904 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqvqw\" (UniqueName: \"kubernetes.io/projected/1b008e37-2f91-4af6-8719-44eb68027086-kube-api-access-mqvqw\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.797073 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-client-ca\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.797203 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n58tx\" (UniqueName: \"kubernetes.io/projected/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-kube-api-access-n58tx\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " 
pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.797298 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-config\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.797371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-proxy-ca-bundles\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.848953 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" event={"ID":"34ca278b-8fb7-4658-a073-e8aefda92bed","Type":"ContainerDied","Data":"a30789f092088fc2497aaec3c78d7d774e6241028f37f4afd6356f887835ebdd"} Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.848975 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lst4d" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.849012 4805 scope.go:117] "RemoveContainer" containerID="af208842d60974bce121cbc7b17e4972ad7bdd0850414acab651f14854c685bf" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.851080 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" event={"ID":"68bd2261-de7d-47ae-a688-59fa77073077","Type":"ContainerDied","Data":"d8f2db17c779db6734e78f8adb7ab9fa1ae4bb6419b4b5d730289b3e34c17d14"} Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.851149 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.867482 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn"] Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.871654 4805 scope.go:117] "RemoveContainer" containerID="861596bbab028c22deb93c7ba6a4acd2a7f5960698794a942c8cf431e2ddb6f7" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.871676 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xvrjn"] Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.887369 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lst4d"] Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.893689 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lst4d"] Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.898889 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-client-ca\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.898926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b008e37-2f91-4af6-8719-44eb68027086-serving-cert\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.898959 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-serving-cert\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.898980 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-config\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.899001 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqvqw\" (UniqueName: \"kubernetes.io/projected/1b008e37-2f91-4af6-8719-44eb68027086-kube-api-access-mqvqw\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.899020 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-client-ca\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " 
pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.899036 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n58tx\" (UniqueName: \"kubernetes.io/projected/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-kube-api-access-n58tx\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.899068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-config\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.899091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-proxy-ca-bundles\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.901138 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-config\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.901747 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-client-ca\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.901388 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-config\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.901818 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-client-ca\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.901888 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-proxy-ca-bundles\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.903030 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b008e37-2f91-4af6-8719-44eb68027086-serving-cert\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.907189 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-serving-cert\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.915860 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqvqw\" (UniqueName: \"kubernetes.io/projected/1b008e37-2f91-4af6-8719-44eb68027086-kube-api-access-mqvqw\") pod \"controller-manager-847cbf85fb-zkkrl\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:58 crc kubenswrapper[4805]: I0217 00:27:58.917185 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n58tx\" (UniqueName: \"kubernetes.io/projected/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-kube-api-access-n58tx\") pod \"route-controller-manager-6f97766788-wnx64\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.000553 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.017702 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.239298 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-847cbf85fb-zkkrl"] Feb 17 00:27:59 crc kubenswrapper[4805]: W0217 00:27:59.245414 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b008e37_2f91_4af6_8719_44eb68027086.slice/crio-e9ef3c76f95c7dd5537591ea9eed8fe5c5f8b27a5ae984f06a12cdc42e1cf6ff WatchSource:0}: Error finding container e9ef3c76f95c7dd5537591ea9eed8fe5c5f8b27a5ae984f06a12cdc42e1cf6ff: Status 404 returned error can't find the container with id e9ef3c76f95c7dd5537591ea9eed8fe5c5f8b27a5ae984f06a12cdc42e1cf6ff Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.408902 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64"] Feb 17 00:27:59 crc kubenswrapper[4805]: W0217 00:27:59.413406 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71a32cb3_41c6_4b9e_baa9_fb1de47bba18.slice/crio-4d07ff64bebc9a4088169e7da119c63bd724d8887cc862c831e3a85c2a6e6486 WatchSource:0}: Error finding container 4d07ff64bebc9a4088169e7da119c63bd724d8887cc862c831e3a85c2a6e6486: Status 404 returned error can't find the container with id 4d07ff64bebc9a4088169e7da119c63bd724d8887cc862c831e3a85c2a6e6486 Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.860753 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" event={"ID":"71a32cb3-41c6-4b9e-baa9-fb1de47bba18","Type":"ContainerStarted","Data":"652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47"} Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.862063 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.862174 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" event={"ID":"71a32cb3-41c6-4b9e-baa9-fb1de47bba18","Type":"ContainerStarted","Data":"4d07ff64bebc9a4088169e7da119c63bd724d8887cc862c831e3a85c2a6e6486"} Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.865387 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" event={"ID":"1b008e37-2f91-4af6-8719-44eb68027086","Type":"ContainerStarted","Data":"d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead"} Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.865423 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" event={"ID":"1b008e37-2f91-4af6-8719-44eb68027086","Type":"ContainerStarted","Data":"e9ef3c76f95c7dd5537591ea9eed8fe5c5f8b27a5ae984f06a12cdc42e1cf6ff"} Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.865672 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.872126 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.886661 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" podStartSLOduration=2.886639917 podStartE2EDuration="2.886639917s" podCreationTimestamp="2026-02-17 00:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:27:59.882776684 +0000 UTC m=+305.898586122" watchObservedRunningTime="2026-02-17 00:27:59.886639917 +0000 UTC m=+305.902449315" Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.906954 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" podStartSLOduration=2.906930221 podStartE2EDuration="2.906930221s" podCreationTimestamp="2026-02-17 00:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:27:59.90314823 +0000 UTC m=+305.918957638" watchObservedRunningTime="2026-02-17 00:27:59.906930221 +0000 UTC m=+305.922739659" Feb 17 00:27:59 crc kubenswrapper[4805]: I0217 00:27:59.980548 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:28:00 crc kubenswrapper[4805]: I0217 00:28:00.796086 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ca278b-8fb7-4658-a073-e8aefda92bed" path="/var/lib/kubelet/pods/34ca278b-8fb7-4658-a073-e8aefda92bed/volumes" Feb 17 00:28:00 crc kubenswrapper[4805]: I0217 00:28:00.797747 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68bd2261-de7d-47ae-a688-59fa77073077" path="/var/lib/kubelet/pods/68bd2261-de7d-47ae-a688-59fa77073077/volumes" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.096423 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-847cbf85fb-zkkrl"] Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.096628 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" podUID="1b008e37-2f91-4af6-8719-44eb68027086" containerName="controller-manager" containerID="cri-o://d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead" gracePeriod=30 Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.121651 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64"] Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.121890 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" podUID="71a32cb3-41c6-4b9e-baa9-fb1de47bba18" containerName="route-controller-manager" containerID="cri-o://652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47" gracePeriod=30 Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.588543 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.648137 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.693971 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n58tx\" (UniqueName: \"kubernetes.io/projected/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-kube-api-access-n58tx\") pod \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.694049 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-serving-cert\") pod \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.694099 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-config\") pod \"1b008e37-2f91-4af6-8719-44eb68027086\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.694161 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-client-ca\") pod \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.694196 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-client-ca\") pod \"1b008e37-2f91-4af6-8719-44eb68027086\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.694221 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-proxy-ca-bundles\") pod \"1b008e37-2f91-4af6-8719-44eb68027086\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.694257 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-config\") pod \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\" (UID: \"71a32cb3-41c6-4b9e-baa9-fb1de47bba18\") " Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.694294 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b008e37-2f91-4af6-8719-44eb68027086-serving-cert\") pod \"1b008e37-2f91-4af6-8719-44eb68027086\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.694355 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqvqw\" (UniqueName: \"kubernetes.io/projected/1b008e37-2f91-4af6-8719-44eb68027086-kube-api-access-mqvqw\") pod \"1b008e37-2f91-4af6-8719-44eb68027086\" (UID: \"1b008e37-2f91-4af6-8719-44eb68027086\") " Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.695032 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-config" (OuterVolumeSpecName: "config") pod "71a32cb3-41c6-4b9e-baa9-fb1de47bba18" (UID: 
"71a32cb3-41c6-4b9e-baa9-fb1de47bba18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.695420 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-config" (OuterVolumeSpecName: "config") pod "1b008e37-2f91-4af6-8719-44eb68027086" (UID: "1b008e37-2f91-4af6-8719-44eb68027086"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.695488 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-client-ca" (OuterVolumeSpecName: "client-ca") pod "71a32cb3-41c6-4b9e-baa9-fb1de47bba18" (UID: "71a32cb3-41c6-4b9e-baa9-fb1de47bba18"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.695647 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1b008e37-2f91-4af6-8719-44eb68027086" (UID: "1b008e37-2f91-4af6-8719-44eb68027086"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.695827 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-client-ca" (OuterVolumeSpecName: "client-ca") pod "1b008e37-2f91-4af6-8719-44eb68027086" (UID: "1b008e37-2f91-4af6-8719-44eb68027086"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.699918 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "71a32cb3-41c6-4b9e-baa9-fb1de47bba18" (UID: "71a32cb3-41c6-4b9e-baa9-fb1de47bba18"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.700129 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b008e37-2f91-4af6-8719-44eb68027086-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1b008e37-2f91-4af6-8719-44eb68027086" (UID: "1b008e37-2f91-4af6-8719-44eb68027086"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.700224 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b008e37-2f91-4af6-8719-44eb68027086-kube-api-access-mqvqw" (OuterVolumeSpecName: "kube-api-access-mqvqw") pod "1b008e37-2f91-4af6-8719-44eb68027086" (UID: "1b008e37-2f91-4af6-8719-44eb68027086"). InnerVolumeSpecName "kube-api-access-mqvqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.701462 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-kube-api-access-n58tx" (OuterVolumeSpecName: "kube-api-access-n58tx") pod "71a32cb3-41c6-4b9e-baa9-fb1de47bba18" (UID: "71a32cb3-41c6-4b9e-baa9-fb1de47bba18"). 
InnerVolumeSpecName "kube-api-access-n58tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.795868 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b008e37-2f91-4af6-8719-44eb68027086-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.795926 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqvqw\" (UniqueName: \"kubernetes.io/projected/1b008e37-2f91-4af6-8719-44eb68027086-kube-api-access-mqvqw\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.795946 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n58tx\" (UniqueName: \"kubernetes.io/projected/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-kube-api-access-n58tx\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.795961 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.795976 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.795989 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.796002 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.796045 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b008e37-2f91-4af6-8719-44eb68027086-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.796061 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71a32cb3-41c6-4b9e-baa9-fb1de47bba18-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.901807 4805 generic.go:334] "Generic (PLEG): container finished" podID="1b008e37-2f91-4af6-8719-44eb68027086" containerID="d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead" exitCode=0 Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.901861 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.901897 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" event={"ID":"1b008e37-2f91-4af6-8719-44eb68027086","Type":"ContainerDied","Data":"d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead"} Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.901928 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-847cbf85fb-zkkrl" event={"ID":"1b008e37-2f91-4af6-8719-44eb68027086","Type":"ContainerDied","Data":"e9ef3c76f95c7dd5537591ea9eed8fe5c5f8b27a5ae984f06a12cdc42e1cf6ff"} Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.901972 4805 scope.go:117] "RemoveContainer" containerID="d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.904373 4805 generic.go:334] "Generic (PLEG): container finished" podID="71a32cb3-41c6-4b9e-baa9-fb1de47bba18" containerID="652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47" exitCode=0 Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.904401 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" event={"ID":"71a32cb3-41c6-4b9e-baa9-fb1de47bba18","Type":"ContainerDied","Data":"652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47"} Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.904419 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" event={"ID":"71a32cb3-41c6-4b9e-baa9-fb1de47bba18","Type":"ContainerDied","Data":"4d07ff64bebc9a4088169e7da119c63bd724d8887cc862c831e3a85c2a6e6486"} Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.904454 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.918283 4805 scope.go:117] "RemoveContainer" containerID="d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead" Feb 17 00:28:05 crc kubenswrapper[4805]: E0217 00:28:05.918692 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead\": container with ID starting with d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead not found: ID does not exist" containerID="d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.918728 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead"} err="failed to get container status \"d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead\": rpc error: code = NotFound desc = could not find container \"d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead\": container with ID starting with d5749d13f52a08054f632b14a50922adb7e15dd17039667a80cb6f2dca9c3ead not found: ID does not exist" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.918753 4805 scope.go:117] "RemoveContainer" containerID="652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.932016 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-847cbf85fb-zkkrl"] Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.932438 4805 scope.go:117] "RemoveContainer" containerID="652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47" Feb 17 00:28:05 crc kubenswrapper[4805]: E0217 00:28:05.932801 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47\": container with ID starting with 652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47 not found: ID does not exist" containerID="652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.932832 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47"} err="failed to get container status \"652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47\": rpc error: code = NotFound desc = could not find container \"652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47\": container with ID starting with 652a0500542853500ca12e30c33685d160fc82e4337762894f37e5442c887c47 not found: ID does not exist" Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.935345 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-847cbf85fb-zkkrl"] Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.944373 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64"] Feb 17 00:28:05 crc kubenswrapper[4805]: I0217 00:28:05.948872 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f97766788-wnx64"] Feb 17 
00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.672860 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld"] Feb 17 00:28:06 crc kubenswrapper[4805]: E0217 00:28:06.673133 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a32cb3-41c6-4b9e-baa9-fb1de47bba18" containerName="route-controller-manager" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.673153 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a32cb3-41c6-4b9e-baa9-fb1de47bba18" containerName="route-controller-manager" Feb 17 00:28:06 crc kubenswrapper[4805]: E0217 00:28:06.673171 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b008e37-2f91-4af6-8719-44eb68027086" containerName="controller-manager" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.673183 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b008e37-2f91-4af6-8719-44eb68027086" containerName="controller-manager" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.673354 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b008e37-2f91-4af6-8719-44eb68027086" containerName="controller-manager" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.673375 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a32cb3-41c6-4b9e-baa9-fb1de47bba18" containerName="route-controller-manager" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.675630 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.684815 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.685108 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.685519 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.685663 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb"] Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.685761 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.685534 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.686037 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.687389 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.701417 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.701639 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.705739 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.705895 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.706450 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.709781 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-client-ca\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.710394 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.710600 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de00e548-047f-45c3-b004-e37b89c8b548-serving-cert\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.710683 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-config\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.710719 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wst8w\" (UniqueName: \"kubernetes.io/projected/de00e548-047f-45c3-b004-e37b89c8b548-kube-api-access-wst8w\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.717387 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.721466 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld"] Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.727387 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb"] Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.792584 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b008e37-2f91-4af6-8719-44eb68027086" path="/var/lib/kubelet/pods/1b008e37-2f91-4af6-8719-44eb68027086/volumes" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.793520 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a32cb3-41c6-4b9e-baa9-fb1de47bba18" path="/var/lib/kubelet/pods/71a32cb3-41c6-4b9e-baa9-fb1de47bba18/volumes" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.811860 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-client-ca\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.811932 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-proxy-ca-bundles\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.812014 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de00e548-047f-45c3-b004-e37b89c8b548-serving-cert\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.812045 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-config\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.812068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wst8w\" (UniqueName: \"kubernetes.io/projected/de00e548-047f-45c3-b004-e37b89c8b548-kube-api-access-wst8w\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.812093 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdwht\" (UniqueName: \"kubernetes.io/projected/8085f154-1ca6-464b-8648-25c54af9068b-kube-api-access-pdwht\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.812115 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-config\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: 
\"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.812140 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-client-ca\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.812164 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8085f154-1ca6-464b-8648-25c54af9068b-serving-cert\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.813987 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-client-ca\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.816214 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-config\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.817941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de00e548-047f-45c3-b004-e37b89c8b548-serving-cert\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.842391 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wst8w\" (UniqueName: \"kubernetes.io/projected/de00e548-047f-45c3-b004-e37b89c8b548-kube-api-access-wst8w\") pod \"route-controller-manager-7cb867f77-8tzld\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.892410 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-77nqw"] Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.893541 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.896723 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.912959 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-config\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.913003 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8085f154-1ca6-464b-8648-25c54af9068b-serving-cert\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.913031 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-client-ca\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.913067 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-proxy-ca-bundles\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.913104 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdwht\" (UniqueName: \"kubernetes.io/projected/8085f154-1ca6-464b-8648-25c54af9068b-kube-api-access-pdwht\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.914004 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-client-ca\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.914712 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-config\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.915011 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-proxy-ca-bundles\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " 
pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.919206 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8085f154-1ca6-464b-8648-25c54af9068b-serving-cert\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.934766 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdwht\" (UniqueName: \"kubernetes.io/projected/8085f154-1ca6-464b-8648-25c54af9068b-kube-api-access-pdwht\") pod \"controller-manager-69fc8cbf9c-h7cqb\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:06 crc kubenswrapper[4805]: I0217 00:28:06.949031 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77nqw"] Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.012009 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.015095 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34af9a3c-a732-4e70-b52a-abc52c108a33-utilities\") pod \"certified-operators-77nqw\" (UID: \"34af9a3c-a732-4e70-b52a-abc52c108a33\") " pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.015189 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34af9a3c-a732-4e70-b52a-abc52c108a33-catalog-content\") pod \"certified-operators-77nqw\" (UID: \"34af9a3c-a732-4e70-b52a-abc52c108a33\") " pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.015350 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hbx8\" (UniqueName: \"kubernetes.io/projected/34af9a3c-a732-4e70-b52a-abc52c108a33-kube-api-access-9hbx8\") pod \"certified-operators-77nqw\" (UID: \"34af9a3c-a732-4e70-b52a-abc52c108a33\") " pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.024764 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.094667 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nzjp8"] Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.095828 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.098879 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.108219 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzjp8"] Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.116024 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hbx8\" (UniqueName: \"kubernetes.io/projected/34af9a3c-a732-4e70-b52a-abc52c108a33-kube-api-access-9hbx8\") pod \"certified-operators-77nqw\" (UID: \"34af9a3c-a732-4e70-b52a-abc52c108a33\") " pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.116074 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34af9a3c-a732-4e70-b52a-abc52c108a33-utilities\") pod \"certified-operators-77nqw\" (UID: \"34af9a3c-a732-4e70-b52a-abc52c108a33\") " pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.116101 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34af9a3c-a732-4e70-b52a-abc52c108a33-catalog-content\") pod \"certified-operators-77nqw\" (UID: \"34af9a3c-a732-4e70-b52a-abc52c108a33\") " pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.116486 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34af9a3c-a732-4e70-b52a-abc52c108a33-catalog-content\") pod \"certified-operators-77nqw\" (UID: \"34af9a3c-a732-4e70-b52a-abc52c108a33\") " pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.117201 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34af9a3c-a732-4e70-b52a-abc52c108a33-utilities\") pod \"certified-operators-77nqw\" (UID: \"34af9a3c-a732-4e70-b52a-abc52c108a33\") " pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.135394 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hbx8\" (UniqueName: \"kubernetes.io/projected/34af9a3c-a732-4e70-b52a-abc52c108a33-kube-api-access-9hbx8\") pod \"certified-operators-77nqw\" (UID: \"34af9a3c-a732-4e70-b52a-abc52c108a33\") " pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.217727 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzh7b\" (UniqueName: \"kubernetes.io/projected/f6d87408-264b-44dc-a29c-f1d154ce5b77-kube-api-access-jzh7b\") pod \"community-operators-nzjp8\" (UID: \"f6d87408-264b-44dc-a29c-f1d154ce5b77\") " pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.217806 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d87408-264b-44dc-a29c-f1d154ce5b77-utilities\") pod \"community-operators-nzjp8\" (UID: 
\"f6d87408-264b-44dc-a29c-f1d154ce5b77\") " pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.217879 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d87408-264b-44dc-a29c-f1d154ce5b77-catalog-content\") pod \"community-operators-nzjp8\" (UID: \"f6d87408-264b-44dc-a29c-f1d154ce5b77\") " pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.219456 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.319551 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d87408-264b-44dc-a29c-f1d154ce5b77-utilities\") pod \"community-operators-nzjp8\" (UID: \"f6d87408-264b-44dc-a29c-f1d154ce5b77\") " pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.319599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d87408-264b-44dc-a29c-f1d154ce5b77-catalog-content\") pod \"community-operators-nzjp8\" (UID: \"f6d87408-264b-44dc-a29c-f1d154ce5b77\") " pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.319696 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzh7b\" (UniqueName: \"kubernetes.io/projected/f6d87408-264b-44dc-a29c-f1d154ce5b77-kube-api-access-jzh7b\") pod \"community-operators-nzjp8\" (UID: \"f6d87408-264b-44dc-a29c-f1d154ce5b77\") " pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.320511 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d87408-264b-44dc-a29c-f1d154ce5b77-utilities\") pod \"community-operators-nzjp8\" (UID: \"f6d87408-264b-44dc-a29c-f1d154ce5b77\") " pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.320785 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d87408-264b-44dc-a29c-f1d154ce5b77-catalog-content\") pod \"community-operators-nzjp8\" (UID: \"f6d87408-264b-44dc-a29c-f1d154ce5b77\") " pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.336944 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzh7b\" (UniqueName: \"kubernetes.io/projected/f6d87408-264b-44dc-a29c-f1d154ce5b77-kube-api-access-jzh7b\") pod \"community-operators-nzjp8\" (UID: \"f6d87408-264b-44dc-a29c-f1d154ce5b77\") " pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.429774 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.436966 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld"] Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.485808 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb"] Feb 17 00:28:07 crc kubenswrapper[4805]: W0217 00:28:07.492343 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8085f154_1ca6_464b_8648_25c54af9068b.slice/crio-7bb3458d9e70bccb6c01c25109d2bd3a89e7326ffcffd897f660445f1d48ce7b WatchSource:0}: Error finding container 7bb3458d9e70bccb6c01c25109d2bd3a89e7326ffcffd897f660445f1d48ce7b: Status 404 returned error can't find the container with id 7bb3458d9e70bccb6c01c25109d2bd3a89e7326ffcffd897f660445f1d48ce7b Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.670528 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77nqw"] Feb 17 00:28:07 crc kubenswrapper[4805]: W0217 00:28:07.679021 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34af9a3c_a732_4e70_b52a_abc52c108a33.slice/crio-9969a4380e26cc5c9203fa616560539960c67e61020d94adc5daf8a314f8f148 WatchSource:0}: Error finding container 9969a4380e26cc5c9203fa616560539960c67e61020d94adc5daf8a314f8f148: Status 404 returned error can't find the container with id 9969a4380e26cc5c9203fa616560539960c67e61020d94adc5daf8a314f8f148 Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.740914 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nzjp8"] Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.921933 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" event={"ID":"de00e548-047f-45c3-b004-e37b89c8b548","Type":"ContainerStarted","Data":"89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7"} Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.921982 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.921997 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" event={"ID":"de00e548-047f-45c3-b004-e37b89c8b548","Type":"ContainerStarted","Data":"58a03715f5a27855b968ff139216ddb080da44c664525cba12ca9673e1b204c1"} Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.924112 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" event={"ID":"8085f154-1ca6-464b-8648-25c54af9068b","Type":"ContainerStarted","Data":"d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677"} Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.924165 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" event={"ID":"8085f154-1ca6-464b-8648-25c54af9068b","Type":"ContainerStarted","Data":"7bb3458d9e70bccb6c01c25109d2bd3a89e7326ffcffd897f660445f1d48ce7b"} Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 
00:28:07.924338 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.925315 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzjp8" event={"ID":"f6d87408-264b-44dc-a29c-f1d154ce5b77","Type":"ContainerStarted","Data":"6b4b8edcb25bf62fee469874af58fae73122e99236a9904f6882c7c1cf809d23"} Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.930026 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.930672 4805 generic.go:334] "Generic (PLEG): container finished" podID="34af9a3c-a732-4e70-b52a-abc52c108a33" containerID="ac63c0dffb90eb6567aa67a3c1e314111556f56d41cb4157c891b6415f548048" exitCode=0 Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.930727 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77nqw" event={"ID":"34af9a3c-a732-4e70-b52a-abc52c108a33","Type":"ContainerDied","Data":"ac63c0dffb90eb6567aa67a3c1e314111556f56d41cb4157c891b6415f548048"} Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.930782 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77nqw" event={"ID":"34af9a3c-a732-4e70-b52a-abc52c108a33","Type":"ContainerStarted","Data":"9969a4380e26cc5c9203fa616560539960c67e61020d94adc5daf8a314f8f148"} Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.963774 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" podStartSLOduration=2.96375121 podStartE2EDuration="2.96375121s" podCreationTimestamp="2026-02-17 00:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:28:07.960367071 +0000 UTC m=+313.976176479" watchObservedRunningTime="2026-02-17 00:28:07.96375121 +0000 UTC m=+313.979560608" Feb 17 00:28:07 crc kubenswrapper[4805]: I0217 00:28:07.986472 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" podStartSLOduration=2.986453814 podStartE2EDuration="2.986453814s" podCreationTimestamp="2026-02-17 00:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:28:07.984430585 +0000 UTC m=+314.000239993" watchObservedRunningTime="2026-02-17 00:28:07.986453814 +0000 UTC m=+314.002263212" Feb 17 00:28:08 crc kubenswrapper[4805]: I0217 00:28:08.155866 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:08 crc kubenswrapper[4805]: I0217 00:28:08.941503 4805 generic.go:334] "Generic (PLEG): container finished" podID="34af9a3c-a732-4e70-b52a-abc52c108a33" containerID="a8d0170253be14c71731dccf54b05fab8222ca645b573ee30607eae55d5321b9" exitCode=0 Feb 17 00:28:08 crc kubenswrapper[4805]: I0217 00:28:08.941942 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77nqw" 
event={"ID":"34af9a3c-a732-4e70-b52a-abc52c108a33","Type":"ContainerDied","Data":"a8d0170253be14c71731dccf54b05fab8222ca645b573ee30607eae55d5321b9"} Feb 17 00:28:08 crc kubenswrapper[4805]: I0217 00:28:08.944743 4805 generic.go:334] "Generic (PLEG): container finished" podID="f6d87408-264b-44dc-a29c-f1d154ce5b77" containerID="9423d6eed9d916adeeef24666a119f60e1ec78143bf1a7a11bf7a6235c3290a8" exitCode=0 Feb 17 00:28:08 crc kubenswrapper[4805]: I0217 00:28:08.944895 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzjp8" event={"ID":"f6d87408-264b-44dc-a29c-f1d154ce5b77","Type":"ContainerDied","Data":"9423d6eed9d916adeeef24666a119f60e1ec78143bf1a7a11bf7a6235c3290a8"} Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.293303 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vw2k2"] Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.295132 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.297620 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.313839 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw2k2"] Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.345705 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08eb41f7-10d2-42b7-b96a-998cd213dfe1-utilities\") pod \"redhat-marketplace-vw2k2\" (UID: \"08eb41f7-10d2-42b7-b96a-998cd213dfe1\") " pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.345852 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbx7h\" (UniqueName: \"kubernetes.io/projected/08eb41f7-10d2-42b7-b96a-998cd213dfe1-kube-api-access-sbx7h\") pod \"redhat-marketplace-vw2k2\" (UID: \"08eb41f7-10d2-42b7-b96a-998cd213dfe1\") " pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.345907 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08eb41f7-10d2-42b7-b96a-998cd213dfe1-catalog-content\") pod \"redhat-marketplace-vw2k2\" (UID: \"08eb41f7-10d2-42b7-b96a-998cd213dfe1\") " pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.446698 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08eb41f7-10d2-42b7-b96a-998cd213dfe1-utilities\") pod \"redhat-marketplace-vw2k2\" (UID: \"08eb41f7-10d2-42b7-b96a-998cd213dfe1\") " pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.446788 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbx7h\" (UniqueName: \"kubernetes.io/projected/08eb41f7-10d2-42b7-b96a-998cd213dfe1-kube-api-access-sbx7h\") pod \"redhat-marketplace-vw2k2\" (UID: \"08eb41f7-10d2-42b7-b96a-998cd213dfe1\") " pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.446823 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08eb41f7-10d2-42b7-b96a-998cd213dfe1-catalog-content\") pod \"redhat-marketplace-vw2k2\" (UID: \"08eb41f7-10d2-42b7-b96a-998cd213dfe1\") " pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.447267 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08eb41f7-10d2-42b7-b96a-998cd213dfe1-catalog-content\") pod \"redhat-marketplace-vw2k2\" (UID: \"08eb41f7-10d2-42b7-b96a-998cd213dfe1\") " pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.447556 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08eb41f7-10d2-42b7-b96a-998cd213dfe1-utilities\") pod \"redhat-marketplace-vw2k2\" (UID: \"08eb41f7-10d2-42b7-b96a-998cd213dfe1\") " pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.476570 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbx7h\" (UniqueName: \"kubernetes.io/projected/08eb41f7-10d2-42b7-b96a-998cd213dfe1-kube-api-access-sbx7h\") pod \"redhat-marketplace-vw2k2\" (UID: \"08eb41f7-10d2-42b7-b96a-998cd213dfe1\") " pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.613776 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.687136 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xbczx"] Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.688579 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.690886 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.702262 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbczx"] Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.750115 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5c9f438-05f1-4087-a87b-07d2db71c1e0-utilities\") pod \"redhat-operators-xbczx\" (UID: \"a5c9f438-05f1-4087-a87b-07d2db71c1e0\") " pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.750193 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5c9f438-05f1-4087-a87b-07d2db71c1e0-catalog-content\") pod \"redhat-operators-xbczx\" (UID: \"a5c9f438-05f1-4087-a87b-07d2db71c1e0\") " pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.750558 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gv9c\" (UniqueName: \"kubernetes.io/projected/a5c9f438-05f1-4087-a87b-07d2db71c1e0-kube-api-access-4gv9c\") pod \"redhat-operators-xbczx\" (UID: \"a5c9f438-05f1-4087-a87b-07d2db71c1e0\") " pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.852019 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gv9c\" (UniqueName: \"kubernetes.io/projected/a5c9f438-05f1-4087-a87b-07d2db71c1e0-kube-api-access-4gv9c\") pod \"redhat-operators-xbczx\" (UID: \"a5c9f438-05f1-4087-a87b-07d2db71c1e0\") " pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.852099 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5c9f438-05f1-4087-a87b-07d2db71c1e0-utilities\") pod \"redhat-operators-xbczx\" (UID: \"a5c9f438-05f1-4087-a87b-07d2db71c1e0\") " pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.852157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5c9f438-05f1-4087-a87b-07d2db71c1e0-catalog-content\") pod \"redhat-operators-xbczx\" (UID: \"a5c9f438-05f1-4087-a87b-07d2db71c1e0\") " pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.856268 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5c9f438-05f1-4087-a87b-07d2db71c1e0-catalog-content\") pod \"redhat-operators-xbczx\" (UID: \"a5c9f438-05f1-4087-a87b-07d2db71c1e0\") " pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.856421 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5c9f438-05f1-4087-a87b-07d2db71c1e0-utilities\") pod \"redhat-operators-xbczx\" (UID: \"a5c9f438-05f1-4087-a87b-07d2db71c1e0\") " 
pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.861397 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vw2k2"] Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.880147 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gv9c\" (UniqueName: \"kubernetes.io/projected/a5c9f438-05f1-4087-a87b-07d2db71c1e0-kube-api-access-4gv9c\") pod \"redhat-operators-xbczx\" (UID: \"a5c9f438-05f1-4087-a87b-07d2db71c1e0\") " pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.950051 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77nqw" event={"ID":"34af9a3c-a732-4e70-b52a-abc52c108a33","Type":"ContainerStarted","Data":"4bbbec49c8d1b12d00f50b160b608c87c07e2dffed6a4fd4f2455cedf6033009"} Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.951200 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw2k2" event={"ID":"08eb41f7-10d2-42b7-b96a-998cd213dfe1","Type":"ContainerStarted","Data":"ba4d7418ff0bdcf226618299b9bb59543b08ff02de71604f6ec7b444cd416004"} Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.953396 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzjp8" event={"ID":"f6d87408-264b-44dc-a29c-f1d154ce5b77","Type":"ContainerStarted","Data":"8883c180322184606f083df59c11f946af0ba3993475fb0e2186e3fbc4ff10f4"} Feb 17 00:28:09 crc kubenswrapper[4805]: I0217 00:28:09.968502 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-77nqw" podStartSLOduration=2.580828483 podStartE2EDuration="3.968486431s" podCreationTimestamp="2026-02-17 00:28:06 +0000 UTC" firstStartedPulling="2026-02-17 00:28:07.933928037 +0000 UTC m=+313.949737435" lastFinishedPulling="2026-02-17 00:28:09.321585955 +0000 UTC m=+315.337395383" observedRunningTime="2026-02-17 00:28:09.967411089 +0000 UTC m=+315.983220487" watchObservedRunningTime="2026-02-17 00:28:09.968486431 +0000 UTC m=+315.984295829" Feb 17 00:28:10 crc kubenswrapper[4805]: I0217 00:28:10.026662 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:10 crc kubenswrapper[4805]: I0217 00:28:10.441895 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbczx"] Feb 17 00:28:10 crc kubenswrapper[4805]: W0217 00:28:10.454872 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5c9f438_05f1_4087_a87b_07d2db71c1e0.slice/crio-40dbef354ec3090207db306b4570f6f7c2d66fbfe20ec8affa44689bf3d2248e WatchSource:0}: Error finding container 40dbef354ec3090207db306b4570f6f7c2d66fbfe20ec8affa44689bf3d2248e: Status 404 returned error can't find the container with id 40dbef354ec3090207db306b4570f6f7c2d66fbfe20ec8affa44689bf3d2248e Feb 17 00:28:10 crc kubenswrapper[4805]: I0217 00:28:10.959361 4805 generic.go:334] "Generic (PLEG): container finished" podID="f6d87408-264b-44dc-a29c-f1d154ce5b77" containerID="8883c180322184606f083df59c11f946af0ba3993475fb0e2186e3fbc4ff10f4" exitCode=0 Feb 17 00:28:10 crc kubenswrapper[4805]: I0217 00:28:10.959430 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nzjp8" event={"ID":"f6d87408-264b-44dc-a29c-f1d154ce5b77","Type":"ContainerDied","Data":"8883c180322184606f083df59c11f946af0ba3993475fb0e2186e3fbc4ff10f4"} Feb 17 00:28:10 crc kubenswrapper[4805]: I0217 00:28:10.962241 4805 generic.go:334] "Generic (PLEG): container finished" podID="a5c9f438-05f1-4087-a87b-07d2db71c1e0" containerID="70940e0cf404fe4b8186a7e98c458b41805bd66051fb656d6e04deb368ba2a32" exitCode=0 Feb 17 00:28:10 crc kubenswrapper[4805]: I0217 00:28:10.962278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbczx" event={"ID":"a5c9f438-05f1-4087-a87b-07d2db71c1e0","Type":"ContainerDied","Data":"70940e0cf404fe4b8186a7e98c458b41805bd66051fb656d6e04deb368ba2a32"} Feb 17 00:28:10 crc kubenswrapper[4805]: I0217 00:28:10.962349 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbczx" event={"ID":"a5c9f438-05f1-4087-a87b-07d2db71c1e0","Type":"ContainerStarted","Data":"40dbef354ec3090207db306b4570f6f7c2d66fbfe20ec8affa44689bf3d2248e"} Feb 17 00:28:10 crc kubenswrapper[4805]: I0217 00:28:10.963862 4805 generic.go:334] "Generic (PLEG): container finished" podID="08eb41f7-10d2-42b7-b96a-998cd213dfe1" containerID="4a476b53f542616ca4962260dfc9b281e2306d6d3811a5fc5df1b6b8f5e90cb3" exitCode=0 Feb 17 00:28:10 crc kubenswrapper[4805]: I0217 00:28:10.964966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw2k2" event={"ID":"08eb41f7-10d2-42b7-b96a-998cd213dfe1","Type":"ContainerDied","Data":"4a476b53f542616ca4962260dfc9b281e2306d6d3811a5fc5df1b6b8f5e90cb3"} Feb 17 00:28:11 crc kubenswrapper[4805]: I0217 00:28:11.971731 4805 generic.go:334] "Generic (PLEG): container finished" podID="08eb41f7-10d2-42b7-b96a-998cd213dfe1" containerID="87cb9dc4aef3cc4da7e4a91f008e42a2638d45c177d6f5973ec53204bb708218" exitCode=0 Feb 17 00:28:11 crc kubenswrapper[4805]: I0217 00:28:11.971779 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw2k2" event={"ID":"08eb41f7-10d2-42b7-b96a-998cd213dfe1","Type":"ContainerDied","Data":"87cb9dc4aef3cc4da7e4a91f008e42a2638d45c177d6f5973ec53204bb708218"} Feb 17 00:28:11 crc kubenswrapper[4805]: I0217 00:28:11.975053 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-nzjp8" event={"ID":"f6d87408-264b-44dc-a29c-f1d154ce5b77","Type":"ContainerStarted","Data":"3b356c2c2219052a03bdd51195b2a9ee649549868a42ab2563071b479ee83bfc"} Feb 17 00:28:11 crc kubenswrapper[4805]: I0217 00:28:11.976844 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbczx" event={"ID":"a5c9f438-05f1-4087-a87b-07d2db71c1e0","Type":"ContainerStarted","Data":"61b82d23e056a5741987930cc3c0cf9ad7b9a5017c65c602b569aeb7fe16d039"} Feb 17 00:28:12 crc kubenswrapper[4805]: I0217 00:28:12.026808 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nzjp8" podStartSLOduration=2.64509234 podStartE2EDuration="5.026788768s" podCreationTimestamp="2026-02-17 00:28:07 +0000 UTC" firstStartedPulling="2026-02-17 00:28:08.946201383 +0000 UTC m=+314.962010791" lastFinishedPulling="2026-02-17 00:28:11.327897831 +0000 UTC m=+317.343707219" observedRunningTime="2026-02-17 00:28:12.025700806 +0000 UTC m=+318.041510224" watchObservedRunningTime="2026-02-17 00:28:12.026788768 +0000 UTC m=+318.042598166" Feb 17 00:28:12 crc kubenswrapper[4805]: I0217 00:28:12.984765 4805 generic.go:334] "Generic (PLEG): container finished" podID="a5c9f438-05f1-4087-a87b-07d2db71c1e0" containerID="61b82d23e056a5741987930cc3c0cf9ad7b9a5017c65c602b569aeb7fe16d039" exitCode=0 Feb 17 00:28:12 crc kubenswrapper[4805]: I0217 00:28:12.984907 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbczx" event={"ID":"a5c9f438-05f1-4087-a87b-07d2db71c1e0","Type":"ContainerDied","Data":"61b82d23e056a5741987930cc3c0cf9ad7b9a5017c65c602b569aeb7fe16d039"} Feb 17 00:28:12 crc kubenswrapper[4805]: I0217 00:28:12.988502 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vw2k2" event={"ID":"08eb41f7-10d2-42b7-b96a-998cd213dfe1","Type":"ContainerStarted","Data":"00179656cf1ea9daf2bf9b9f37ad3884d594de1a67041b8556d46c5585042616"} Feb 17 00:28:13 crc kubenswrapper[4805]: I0217 00:28:13.018605 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vw2k2" podStartSLOduration=2.565017768 podStartE2EDuration="4.018590434s" podCreationTimestamp="2026-02-17 00:28:09 +0000 UTC" firstStartedPulling="2026-02-17 00:28:10.96555899 +0000 UTC m=+316.981368408" lastFinishedPulling="2026-02-17 00:28:12.419131686 +0000 UTC m=+318.434941074" observedRunningTime="2026-02-17 00:28:13.018214393 +0000 UTC m=+319.034023801" watchObservedRunningTime="2026-02-17 00:28:13.018590434 +0000 UTC m=+319.034399822" Feb 17 00:28:13 crc kubenswrapper[4805]: I0217 00:28:13.996022 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbczx" event={"ID":"a5c9f438-05f1-4087-a87b-07d2db71c1e0","Type":"ContainerStarted","Data":"8a586f18b9d1522e1033e5d39868aaffa316b508540d2320dd8f36f8e2b90678"} Feb 17 00:28:14 crc kubenswrapper[4805]: I0217 00:28:14.021853 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xbczx" podStartSLOduration=2.6189761259999997 podStartE2EDuration="5.021837124s" podCreationTimestamp="2026-02-17 00:28:09 +0000 UTC" firstStartedPulling="2026-02-17 00:28:10.963229672 +0000 UTC m=+316.979039060" lastFinishedPulling="2026-02-17 00:28:13.36609065 +0000 UTC m=+319.381900058" observedRunningTime="2026-02-17 00:28:14.017618511 +0000 UTC m=+320.033427919" 
watchObservedRunningTime="2026-02-17 00:28:14.021837124 +0000 UTC m=+320.037646522" Feb 17 00:28:17 crc kubenswrapper[4805]: I0217 00:28:17.220180 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:17 crc kubenswrapper[4805]: I0217 00:28:17.221923 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:17 crc kubenswrapper[4805]: I0217 00:28:17.261152 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:17 crc kubenswrapper[4805]: I0217 00:28:17.425335 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld"] Feb 17 00:28:17 crc kubenswrapper[4805]: I0217 00:28:17.425677 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" podUID="de00e548-047f-45c3-b004-e37b89c8b548" containerName="route-controller-manager" containerID="cri-o://89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7" gracePeriod=30 Feb 17 00:28:17 crc kubenswrapper[4805]: I0217 00:28:17.431768 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:17 crc kubenswrapper[4805]: I0217 00:28:17.431935 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:17 crc kubenswrapper[4805]: I0217 00:28:17.471631 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:18 crc kubenswrapper[4805]: I0217 00:28:18.059996 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nzjp8" Feb 17 00:28:18 crc kubenswrapper[4805]: I0217 00:28:18.072893 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-77nqw" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.613939 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.615063 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.658530 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.824186 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.850986 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh"] Feb 17 00:28:19 crc kubenswrapper[4805]: E0217 00:28:19.851245 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de00e548-047f-45c3-b004-e37b89c8b548" containerName="route-controller-manager" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.851261 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="de00e548-047f-45c3-b004-e37b89c8b548" containerName="route-controller-manager" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.851379 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="de00e548-047f-45c3-b004-e37b89c8b548" containerName="route-controller-manager" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.852098 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.864810 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh"] Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.915720 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-config\") pod \"de00e548-047f-45c3-b004-e37b89c8b548\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.916053 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de00e548-047f-45c3-b004-e37b89c8b548-serving-cert\") pod \"de00e548-047f-45c3-b004-e37b89c8b548\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.916145 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-client-ca\") pod \"de00e548-047f-45c3-b004-e37b89c8b548\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.916167 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wst8w\" (UniqueName: \"kubernetes.io/projected/de00e548-047f-45c3-b004-e37b89c8b548-kube-api-access-wst8w\") pod \"de00e548-047f-45c3-b004-e37b89c8b548\" (UID: \"de00e548-047f-45c3-b004-e37b89c8b548\") " Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.916344 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-serving-cert\") pod \"route-controller-manager-68dc988464-fswhh\" (UID: \"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.916372 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-client-ca\") pod \"route-controller-manager-68dc988464-fswhh\" (UID: 
\"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.916431 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-config\") pod \"route-controller-manager-68dc988464-fswhh\" (UID: \"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.916457 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbbdb\" (UniqueName: \"kubernetes.io/projected/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-kube-api-access-tbbdb\") pod \"route-controller-manager-68dc988464-fswhh\" (UID: \"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.916747 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-client-ca" (OuterVolumeSpecName: "client-ca") pod "de00e548-047f-45c3-b004-e37b89c8b548" (UID: "de00e548-047f-45c3-b004-e37b89c8b548"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.916819 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-config" (OuterVolumeSpecName: "config") pod "de00e548-047f-45c3-b004-e37b89c8b548" (UID: "de00e548-047f-45c3-b004-e37b89c8b548"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.920722 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de00e548-047f-45c3-b004-e37b89c8b548-kube-api-access-wst8w" (OuterVolumeSpecName: "kube-api-access-wst8w") pod "de00e548-047f-45c3-b004-e37b89c8b548" (UID: "de00e548-047f-45c3-b004-e37b89c8b548"). InnerVolumeSpecName "kube-api-access-wst8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:28:19 crc kubenswrapper[4805]: I0217 00:28:19.923867 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de00e548-047f-45c3-b004-e37b89c8b548-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de00e548-047f-45c3-b004-e37b89c8b548" (UID: "de00e548-047f-45c3-b004-e37b89c8b548"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.017264 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-config\") pod \"route-controller-manager-68dc988464-fswhh\" (UID: \"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.017356 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbbdb\" (UniqueName: \"kubernetes.io/projected/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-kube-api-access-tbbdb\") pod \"route-controller-manager-68dc988464-fswhh\" (UID: \"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.017451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-serving-cert\") pod \"route-controller-manager-68dc988464-fswhh\" (UID: \"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.017492 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-client-ca\") pod \"route-controller-manager-68dc988464-fswhh\" (UID: \"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.017581 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.017602 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wst8w\" (UniqueName: \"kubernetes.io/projected/de00e548-047f-45c3-b004-e37b89c8b548-kube-api-access-wst8w\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.017624 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de00e548-047f-45c3-b004-e37b89c8b548-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.017641 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de00e548-047f-45c3-b004-e37b89c8b548-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.018805 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-config\") pod \"route-controller-manager-68dc988464-fswhh\" (UID: \"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.018966 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-client-ca\") pod \"route-controller-manager-68dc988464-fswhh\" 
(UID: \"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.022228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-serving-cert\") pod \"route-controller-manager-68dc988464-fswhh\" (UID: \"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.027829 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.027896 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.030542 4805 generic.go:334] "Generic (PLEG): container finished" podID="de00e548-047f-45c3-b004-e37b89c8b548" containerID="89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7" exitCode=0 Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.030574 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.030587 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" event={"ID":"de00e548-047f-45c3-b004-e37b89c8b548","Type":"ContainerDied","Data":"89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7"} Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.030678 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld" event={"ID":"de00e548-047f-45c3-b004-e37b89c8b548","Type":"ContainerDied","Data":"58a03715f5a27855b968ff139216ddb080da44c664525cba12ca9673e1b204c1"} Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.030851 4805 scope.go:117] "RemoveContainer" containerID="89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.045251 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbbdb\" (UniqueName: \"kubernetes.io/projected/5ffa953f-5c19-43cb-a295-bfcdeb8c2cff-kube-api-access-tbbdb\") pod \"route-controller-manager-68dc988464-fswhh\" (UID: \"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff\") " pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.073760 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.085436 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld"] Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.088436 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb867f77-8tzld"] Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.089227 4805 scope.go:117] "RemoveContainer" containerID="89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 
00:28:20.089692 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vw2k2" Feb 17 00:28:20 crc kubenswrapper[4805]: E0217 00:28:20.089725 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7\": container with ID starting with 89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7 not found: ID does not exist" containerID="89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.089756 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7"} err="failed to get container status \"89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7\": rpc error: code = NotFound desc = could not find container \"89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7\": container with ID starting with 89fa71b9e09ad7054bafec808c88fcce2b0de64e6ee6e447a9922561706e3ea7 not found: ID does not exist" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.173863 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.612360 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh"] Feb 17 00:28:20 crc kubenswrapper[4805]: W0217 00:28:20.620191 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ffa953f_5c19_43cb_a295_bfcdeb8c2cff.slice/crio-3693c295ddaad6fe0a745b9548113b4ce0b2eaf8ea8719cdb0cdf940e7e11027 WatchSource:0}: Error finding container 3693c295ddaad6fe0a745b9548113b4ce0b2eaf8ea8719cdb0cdf940e7e11027: Status 404 returned error can't find the container with id 3693c295ddaad6fe0a745b9548113b4ce0b2eaf8ea8719cdb0cdf940e7e11027 Feb 17 00:28:20 crc kubenswrapper[4805]: I0217 00:28:20.799130 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de00e548-047f-45c3-b004-e37b89c8b548" path="/var/lib/kubelet/pods/de00e548-047f-45c3-b004-e37b89c8b548/volumes" Feb 17 00:28:21 crc kubenswrapper[4805]: I0217 00:28:21.036216 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" event={"ID":"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff","Type":"ContainerStarted","Data":"e8314fc0b5346322030a4af1ee0479363217d2a001c4a187dbe3e775a46235eb"} Feb 17 00:28:21 crc kubenswrapper[4805]: I0217 00:28:21.036265 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" event={"ID":"5ffa953f-5c19-43cb-a295-bfcdeb8c2cff","Type":"ContainerStarted","Data":"3693c295ddaad6fe0a745b9548113b4ce0b2eaf8ea8719cdb0cdf940e7e11027"} Feb 17 00:28:21 crc kubenswrapper[4805]: I0217 00:28:21.036472 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:21 crc kubenswrapper[4805]: I0217 00:28:21.089651 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xbczx" Feb 17 00:28:21 crc 
kubenswrapper[4805]: I0217 00:28:21.108865 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" podStartSLOduration=4.108847904 podStartE2EDuration="4.108847904s" podCreationTimestamp="2026-02-17 00:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:28:21.057864973 +0000 UTC m=+327.073674381" watchObservedRunningTime="2026-02-17 00:28:21.108847904 +0000 UTC m=+327.124657312" Feb 17 00:28:21 crc kubenswrapper[4805]: I0217 00:28:21.207689 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68dc988464-fswhh" Feb 17 00:28:23 crc kubenswrapper[4805]: I0217 00:28:23.077493 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:28:23 crc kubenswrapper[4805]: I0217 00:28:23.077592 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.469779 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g7qvt"] Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.471141 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.505471 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g7qvt"] Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.533614 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbhw\" (UniqueName: \"kubernetes.io/projected/646207f9-5c1f-4121-9528-72d7ccf381dc-kube-api-access-pjbhw\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.533692 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/646207f9-5c1f-4121-9528-72d7ccf381dc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.533751 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/646207f9-5c1f-4121-9528-72d7ccf381dc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.533785 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/646207f9-5c1f-4121-9528-72d7ccf381dc-bound-sa-token\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.533880 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.533918 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/646207f9-5c1f-4121-9528-72d7ccf381dc-registry-tls\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.533968 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/646207f9-5c1f-4121-9528-72d7ccf381dc-trusted-ca\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.534005 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/646207f9-5c1f-4121-9528-72d7ccf381dc-registry-certificates\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.567865 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.635722 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/646207f9-5c1f-4121-9528-72d7ccf381dc-registry-tls\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.635831 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/646207f9-5c1f-4121-9528-72d7ccf381dc-trusted-ca\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.635873 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/646207f9-5c1f-4121-9528-72d7ccf381dc-registry-certificates\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.635932 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbhw\" (UniqueName: \"kubernetes.io/projected/646207f9-5c1f-4121-9528-72d7ccf381dc-kube-api-access-pjbhw\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.635983 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/646207f9-5c1f-4121-9528-72d7ccf381dc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.636030 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/646207f9-5c1f-4121-9528-72d7ccf381dc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.636062 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/646207f9-5c1f-4121-9528-72d7ccf381dc-bound-sa-token\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.637025 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/646207f9-5c1f-4121-9528-72d7ccf381dc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.637789 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/646207f9-5c1f-4121-9528-72d7ccf381dc-trusted-ca\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.638139 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/646207f9-5c1f-4121-9528-72d7ccf381dc-registry-certificates\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.653118 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/646207f9-5c1f-4121-9528-72d7ccf381dc-registry-tls\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.653454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/646207f9-5c1f-4121-9528-72d7ccf381dc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.658234 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/646207f9-5c1f-4121-9528-72d7ccf381dc-bound-sa-token\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.658681 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbhw\" (UniqueName: \"kubernetes.io/projected/646207f9-5c1f-4121-9528-72d7ccf381dc-kube-api-access-pjbhw\") pod \"image-registry-66df7c8f76-g7qvt\" (UID: \"646207f9-5c1f-4121-9528-72d7ccf381dc\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:28 crc kubenswrapper[4805]: I0217 00:28:28.791478 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:29 crc kubenswrapper[4805]: I0217 00:28:29.300728 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g7qvt"] Feb 17 00:28:29 crc kubenswrapper[4805]: W0217 00:28:29.312443 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod646207f9_5c1f_4121_9528_72d7ccf381dc.slice/crio-b63d9879f0d1c5abd9b19199120b7cdb890384969157df746fb3d6723902c646 WatchSource:0}: Error finding container b63d9879f0d1c5abd9b19199120b7cdb890384969157df746fb3d6723902c646: Status 404 returned error can't find the container with id b63d9879f0d1c5abd9b19199120b7cdb890384969157df746fb3d6723902c646 Feb 17 00:28:30 crc kubenswrapper[4805]: I0217 00:28:30.106275 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" event={"ID":"646207f9-5c1f-4121-9528-72d7ccf381dc","Type":"ContainerStarted","Data":"a0919698c21cfc120a16542effa2a8ad8a486f92d7e03d80f962259df21e41b9"} Feb 17 00:28:30 crc kubenswrapper[4805]: I0217 00:28:30.107783 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:30 crc kubenswrapper[4805]: I0217 00:28:30.107827 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" event={"ID":"646207f9-5c1f-4121-9528-72d7ccf381dc","Type":"ContainerStarted","Data":"b63d9879f0d1c5abd9b19199120b7cdb890384969157df746fb3d6723902c646"} Feb 17 00:28:30 crc kubenswrapper[4805]: I0217 00:28:30.134807 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" podStartSLOduration=2.134782752 podStartE2EDuration="2.134782752s" podCreationTimestamp="2026-02-17 00:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:28:30.129386326 +0000 UTC m=+336.145195784" watchObservedRunningTime="2026-02-17 00:28:30.134782752 +0000 UTC m=+336.150592190" Feb 17 00:28:37 crc kubenswrapper[4805]: I0217 00:28:37.474919 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb"] Feb 17 00:28:37 crc kubenswrapper[4805]: I0217 00:28:37.477728 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" podUID="8085f154-1ca6-464b-8648-25c54af9068b" containerName="controller-manager" containerID="cri-o://d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677" gracePeriod=30 Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.001199 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.160535 4805 generic.go:334] "Generic (PLEG): container finished" podID="8085f154-1ca6-464b-8648-25c54af9068b" containerID="d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677" exitCode=0 Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.160612 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.160632 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" event={"ID":"8085f154-1ca6-464b-8648-25c54af9068b","Type":"ContainerDied","Data":"d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677"} Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.160684 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb" event={"ID":"8085f154-1ca6-464b-8648-25c54af9068b","Type":"ContainerDied","Data":"7bb3458d9e70bccb6c01c25109d2bd3a89e7326ffcffd897f660445f1d48ce7b"} Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.160712 4805 scope.go:117] "RemoveContainer" containerID="d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.181903 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-client-ca\") pod \"8085f154-1ca6-464b-8648-25c54af9068b\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.182045 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8085f154-1ca6-464b-8648-25c54af9068b-serving-cert\") pod \"8085f154-1ca6-464b-8648-25c54af9068b\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.182115 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-config\") pod \"8085f154-1ca6-464b-8648-25c54af9068b\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.182209 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdwht\" (UniqueName: \"kubernetes.io/projected/8085f154-1ca6-464b-8648-25c54af9068b-kube-api-access-pdwht\") pod \"8085f154-1ca6-464b-8648-25c54af9068b\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.182395 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-proxy-ca-bundles\") pod \"8085f154-1ca6-464b-8648-25c54af9068b\" (UID: \"8085f154-1ca6-464b-8648-25c54af9068b\") " Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.183227 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-client-ca" (OuterVolumeSpecName: "client-ca") pod "8085f154-1ca6-464b-8648-25c54af9068b" (UID: "8085f154-1ca6-464b-8648-25c54af9068b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.183395 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8085f154-1ca6-464b-8648-25c54af9068b" (UID: "8085f154-1ca6-464b-8648-25c54af9068b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.183497 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-config" (OuterVolumeSpecName: "config") pod "8085f154-1ca6-464b-8648-25c54af9068b" (UID: "8085f154-1ca6-464b-8648-25c54af9068b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.184881 4805 scope.go:117] "RemoveContainer" containerID="d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677" Feb 17 00:28:38 crc kubenswrapper[4805]: E0217 00:28:38.185801 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677\": container with ID starting with d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677 not found: ID does not exist" containerID="d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.185878 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677"} err="failed to get container status \"d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677\": rpc error: code = NotFound desc = could not find container \"d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677\": container with ID starting with d786d4d8f95fe688705dfa79ad9be2d19bf69561b62c1726f87d6a950e9ec677 not found: ID does not exist" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.190060 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8085f154-1ca6-464b-8648-25c54af9068b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8085f154-1ca6-464b-8648-25c54af9068b" (UID: "8085f154-1ca6-464b-8648-25c54af9068b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.191512 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8085f154-1ca6-464b-8648-25c54af9068b-kube-api-access-pdwht" (OuterVolumeSpecName: "kube-api-access-pdwht") pod "8085f154-1ca6-464b-8648-25c54af9068b" (UID: "8085f154-1ca6-464b-8648-25c54af9068b"). InnerVolumeSpecName "kube-api-access-pdwht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.283788 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.283842 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.283860 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8085f154-1ca6-464b-8648-25c54af9068b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.283878 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8085f154-1ca6-464b-8648-25c54af9068b-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.283896 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdwht\" (UniqueName: \"kubernetes.io/projected/8085f154-1ca6-464b-8648-25c54af9068b-kube-api-access-pdwht\") on node \"crc\" DevicePath \"\"" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.511093 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb"] Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.518197 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69fc8cbf9c-h7cqb"] Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.697498 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-847f48b45b-brsrm"] Feb 17 00:28:38 crc kubenswrapper[4805]: E0217 00:28:38.697893 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8085f154-1ca6-464b-8648-25c54af9068b" containerName="controller-manager" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.697933 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8085f154-1ca6-464b-8648-25c54af9068b" containerName="controller-manager" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.698144 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8085f154-1ca6-464b-8648-25c54af9068b" containerName="controller-manager" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.699009 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.707910 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.707947 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.708190 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.708321 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.708653 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.708919 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.717685 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-847f48b45b-brsrm"] Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.718895 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.791167 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d69de2-b7e6-4b3e-a993-39fcf396e889-serving-cert\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.791717 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72d69de2-b7e6-4b3e-a993-39fcf396e889-client-ca\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.791949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72d69de2-b7e6-4b3e-a993-39fcf396e889-proxy-ca-bundles\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.792105 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svb54\" (UniqueName: \"kubernetes.io/projected/72d69de2-b7e6-4b3e-a993-39fcf396e889-kube-api-access-svb54\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.792598 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/72d69de2-b7e6-4b3e-a993-39fcf396e889-config\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.793920 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8085f154-1ca6-464b-8648-25c54af9068b" path="/var/lib/kubelet/pods/8085f154-1ca6-464b-8648-25c54af9068b/volumes" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.893569 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d69de2-b7e6-4b3e-a993-39fcf396e889-serving-cert\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.893671 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72d69de2-b7e6-4b3e-a993-39fcf396e889-client-ca\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.893742 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72d69de2-b7e6-4b3e-a993-39fcf396e889-proxy-ca-bundles\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.893774 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svb54\" (UniqueName: \"kubernetes.io/projected/72d69de2-b7e6-4b3e-a993-39fcf396e889-kube-api-access-svb54\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.893895 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d69de2-b7e6-4b3e-a993-39fcf396e889-config\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.895963 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72d69de2-b7e6-4b3e-a993-39fcf396e889-client-ca\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.896862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72d69de2-b7e6-4b3e-a993-39fcf396e889-config\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.897955 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/72d69de2-b7e6-4b3e-a993-39fcf396e889-proxy-ca-bundles\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.906863 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72d69de2-b7e6-4b3e-a993-39fcf396e889-serving-cert\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:38 crc kubenswrapper[4805]: I0217 00:28:38.924738 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svb54\" (UniqueName: \"kubernetes.io/projected/72d69de2-b7e6-4b3e-a993-39fcf396e889-kube-api-access-svb54\") pod \"controller-manager-847f48b45b-brsrm\" (UID: \"72d69de2-b7e6-4b3e-a993-39fcf396e889\") " pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:39 crc kubenswrapper[4805]: I0217 00:28:39.070873 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:39 crc kubenswrapper[4805]: I0217 00:28:39.523696 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-847f48b45b-brsrm"] Feb 17 00:28:40 crc kubenswrapper[4805]: I0217 00:28:40.179305 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" event={"ID":"72d69de2-b7e6-4b3e-a993-39fcf396e889","Type":"ContainerStarted","Data":"74fd39413b19e1b9877def4a923db507033756e544ab11f353ebf3aea2e51828"} Feb 17 00:28:40 crc kubenswrapper[4805]: I0217 00:28:40.179582 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" event={"ID":"72d69de2-b7e6-4b3e-a993-39fcf396e889","Type":"ContainerStarted","Data":"55705fbc3ef351e74a899c8e840b46cad45057a9a24cb9d3d14f564961851b91"} Feb 17 00:28:40 crc kubenswrapper[4805]: I0217 00:28:40.179747 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:40 crc kubenswrapper[4805]: I0217 00:28:40.187786 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" Feb 17 00:28:40 crc kubenswrapper[4805]: I0217 00:28:40.207940 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-847f48b45b-brsrm" podStartSLOduration=3.2079141780000002 podStartE2EDuration="3.207914178s" podCreationTimestamp="2026-02-17 00:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:28:40.203977594 +0000 UTC m=+346.219786992" watchObservedRunningTime="2026-02-17 00:28:40.207914178 +0000 UTC m=+346.223723606" Feb 17 00:28:48 crc kubenswrapper[4805]: I0217 00:28:48.797511 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-g7qvt" Feb 17 00:28:48 crc kubenswrapper[4805]: I0217 00:28:48.873465 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-s576k"] Feb 17 00:28:53 crc kubenswrapper[4805]: I0217 00:28:53.077815 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:28:53 crc kubenswrapper[4805]: I0217 00:28:53.078216 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:29:13 crc kubenswrapper[4805]: I0217 00:29:13.942540 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" podUID="c367e959-10fb-43d9-baf3-31123c06738b" containerName="registry" containerID="cri-o://73f8a906e01d4c190fc76468d8aa9cbcaf34b352f9715bbe5f7dc6c68a157ea1" gracePeriod=30 Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.397959 4805 generic.go:334] "Generic (PLEG): container finished" podID="c367e959-10fb-43d9-baf3-31123c06738b" containerID="73f8a906e01d4c190fc76468d8aa9cbcaf34b352f9715bbe5f7dc6c68a157ea1" exitCode=0 Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.398036 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" event={"ID":"c367e959-10fb-43d9-baf3-31123c06738b","Type":"ContainerDied","Data":"73f8a906e01d4c190fc76468d8aa9cbcaf34b352f9715bbe5f7dc6c68a157ea1"} Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.398125 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" event={"ID":"c367e959-10fb-43d9-baf3-31123c06738b","Type":"ContainerDied","Data":"4a4c90b0fa55868d4369febd6d6527a62ef9a3961c11acffdb26edfb2d206550"} Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.398150 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a4c90b0fa55868d4369febd6d6527a62ef9a3961c11acffdb26edfb2d206550" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.431099 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.520734 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c367e959-10fb-43d9-baf3-31123c06738b\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.520820 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn88w\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-kube-api-access-pn88w\") pod \"c367e959-10fb-43d9-baf3-31123c06738b\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.520891 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-bound-sa-token\") pod \"c367e959-10fb-43d9-baf3-31123c06738b\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.520950 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c367e959-10fb-43d9-baf3-31123c06738b-installation-pull-secrets\") pod \"c367e959-10fb-43d9-baf3-31123c06738b\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.521042 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-registry-certificates\") pod \"c367e959-10fb-43d9-baf3-31123c06738b\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.521078 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c367e959-10fb-43d9-baf3-31123c06738b-ca-trust-extracted\") pod \"c367e959-10fb-43d9-baf3-31123c06738b\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.521116 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-trusted-ca\") pod \"c367e959-10fb-43d9-baf3-31123c06738b\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.521151 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-registry-tls\") pod \"c367e959-10fb-43d9-baf3-31123c06738b\" (UID: \"c367e959-10fb-43d9-baf3-31123c06738b\") " Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.522487 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c367e959-10fb-43d9-baf3-31123c06738b" (UID: "c367e959-10fb-43d9-baf3-31123c06738b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.522955 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c367e959-10fb-43d9-baf3-31123c06738b" (UID: "c367e959-10fb-43d9-baf3-31123c06738b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.527575 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c367e959-10fb-43d9-baf3-31123c06738b" (UID: "c367e959-10fb-43d9-baf3-31123c06738b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.528139 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c367e959-10fb-43d9-baf3-31123c06738b" (UID: "c367e959-10fb-43d9-baf3-31123c06738b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.531533 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c367e959-10fb-43d9-baf3-31123c06738b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c367e959-10fb-43d9-baf3-31123c06738b" (UID: "c367e959-10fb-43d9-baf3-31123c06738b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.535058 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-kube-api-access-pn88w" (OuterVolumeSpecName: "kube-api-access-pn88w") pod "c367e959-10fb-43d9-baf3-31123c06738b" (UID: "c367e959-10fb-43d9-baf3-31123c06738b"). InnerVolumeSpecName "kube-api-access-pn88w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.536753 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c367e959-10fb-43d9-baf3-31123c06738b" (UID: "c367e959-10fb-43d9-baf3-31123c06738b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.536857 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c367e959-10fb-43d9-baf3-31123c06738b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c367e959-10fb-43d9-baf3-31123c06738b" (UID: "c367e959-10fb-43d9-baf3-31123c06738b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.623356 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn88w\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-kube-api-access-pn88w\") on node \"crc\" DevicePath \"\"" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.623434 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.623458 4805 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c367e959-10fb-43d9-baf3-31123c06738b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.623475 4805 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.623493 4805 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c367e959-10fb-43d9-baf3-31123c06738b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.623509 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c367e959-10fb-43d9-baf3-31123c06738b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:29:14 crc kubenswrapper[4805]: I0217 00:29:14.623524 4805 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c367e959-10fb-43d9-baf3-31123c06738b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:29:15 crc kubenswrapper[4805]: I0217 00:29:15.404386 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-s576k" Feb 17 00:29:15 crc kubenswrapper[4805]: I0217 00:29:15.423955 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s576k"] Feb 17 00:29:15 crc kubenswrapper[4805]: I0217 00:29:15.430582 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-s576k"] Feb 17 00:29:16 crc kubenswrapper[4805]: I0217 00:29:16.818666 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c367e959-10fb-43d9-baf3-31123c06738b" path="/var/lib/kubelet/pods/c367e959-10fb-43d9-baf3-31123c06738b/volumes" Feb 17 00:29:23 crc kubenswrapper[4805]: I0217 00:29:23.077568 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:29:23 crc kubenswrapper[4805]: I0217 00:29:23.078158 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:29:23 crc kubenswrapper[4805]: I0217 00:29:23.078204 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:29:23 crc kubenswrapper[4805]: I0217 00:29:23.078896 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bff5edca2c2cd9c3a1645d8c15227ed2d3c87621069f2931407d8d9904051961"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:29:23 crc kubenswrapper[4805]: I0217 00:29:23.078961 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://bff5edca2c2cd9c3a1645d8c15227ed2d3c87621069f2931407d8d9904051961" gracePeriod=600 Feb 17 00:29:23 crc kubenswrapper[4805]: I0217 00:29:23.461256 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="bff5edca2c2cd9c3a1645d8c15227ed2d3c87621069f2931407d8d9904051961" exitCode=0 Feb 17 00:29:23 crc kubenswrapper[4805]: I0217 00:29:23.461396 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"bff5edca2c2cd9c3a1645d8c15227ed2d3c87621069f2931407d8d9904051961"} Feb 17 00:29:23 crc kubenswrapper[4805]: I0217 00:29:23.461673 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"a1d4cf0710e2c345e6ab83fff28c000c6465bd6ba78c6d4223f43eb52bfaa7ec"} Feb 17 00:29:23 crc kubenswrapper[4805]: I0217 00:29:23.461703 4805 scope.go:117] "RemoveContainer" 
containerID="da5a5d619fcbaaa90a0311377d41ed630f335dc9a9f4732b0dd4efc109f88287" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.206254 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49"] Feb 17 00:30:00 crc kubenswrapper[4805]: E0217 00:30:00.207094 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c367e959-10fb-43d9-baf3-31123c06738b" containerName="registry" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.207115 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c367e959-10fb-43d9-baf3-31123c06738b" containerName="registry" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.207264 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c367e959-10fb-43d9-baf3-31123c06738b" containerName="registry" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.207718 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.213925 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.214121 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49"] Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.214530 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.323427 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7121f994-a6dc-4821-9f9b-f21ef4e212fe-secret-volume\") pod \"collect-profiles-29521470-82z49\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.323591 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m5h9\" (UniqueName: \"kubernetes.io/projected/7121f994-a6dc-4821-9f9b-f21ef4e212fe-kube-api-access-2m5h9\") pod \"collect-profiles-29521470-82z49\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.323665 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7121f994-a6dc-4821-9f9b-f21ef4e212fe-config-volume\") pod \"collect-profiles-29521470-82z49\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.424782 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7121f994-a6dc-4821-9f9b-f21ef4e212fe-secret-volume\") pod \"collect-profiles-29521470-82z49\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.424890 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2m5h9\" (UniqueName: \"kubernetes.io/projected/7121f994-a6dc-4821-9f9b-f21ef4e212fe-kube-api-access-2m5h9\") pod \"collect-profiles-29521470-82z49\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.424945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7121f994-a6dc-4821-9f9b-f21ef4e212fe-config-volume\") pod \"collect-profiles-29521470-82z49\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.429084 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7121f994-a6dc-4821-9f9b-f21ef4e212fe-config-volume\") pod \"collect-profiles-29521470-82z49\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.433270 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7121f994-a6dc-4821-9f9b-f21ef4e212fe-secret-volume\") pod \"collect-profiles-29521470-82z49\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.452969 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m5h9\" (UniqueName: \"kubernetes.io/projected/7121f994-a6dc-4821-9f9b-f21ef4e212fe-kube-api-access-2m5h9\") pod \"collect-profiles-29521470-82z49\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:00 crc kubenswrapper[4805]: I0217 00:30:00.531584 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:01 crc kubenswrapper[4805]: I0217 00:30:01.006891 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49"] Feb 17 00:30:01 crc kubenswrapper[4805]: I0217 00:30:01.767604 4805 generic.go:334] "Generic (PLEG): container finished" podID="7121f994-a6dc-4821-9f9b-f21ef4e212fe" containerID="7de9ee56286f6e03b2db57118e5510929b543d1e2598155979bfc52d5571a49b" exitCode=0 Feb 17 00:30:01 crc kubenswrapper[4805]: I0217 00:30:01.768038 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" event={"ID":"7121f994-a6dc-4821-9f9b-f21ef4e212fe","Type":"ContainerDied","Data":"7de9ee56286f6e03b2db57118e5510929b543d1e2598155979bfc52d5571a49b"} Feb 17 00:30:01 crc kubenswrapper[4805]: I0217 00:30:01.768106 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" event={"ID":"7121f994-a6dc-4821-9f9b-f21ef4e212fe","Type":"ContainerStarted","Data":"58bb46ddb22878d0bc722d5a41aebb952b736c997c0592951aebe313b4920249"} Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.158765 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.271403 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7121f994-a6dc-4821-9f9b-f21ef4e212fe-config-volume\") pod \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.271511 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m5h9\" (UniqueName: \"kubernetes.io/projected/7121f994-a6dc-4821-9f9b-f21ef4e212fe-kube-api-access-2m5h9\") pod \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.271607 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7121f994-a6dc-4821-9f9b-f21ef4e212fe-secret-volume\") pod \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\" (UID: \"7121f994-a6dc-4821-9f9b-f21ef4e212fe\") " Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.272727 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7121f994-a6dc-4821-9f9b-f21ef4e212fe-config-volume" (OuterVolumeSpecName: "config-volume") pod "7121f994-a6dc-4821-9f9b-f21ef4e212fe" (UID: "7121f994-a6dc-4821-9f9b-f21ef4e212fe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.277703 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7121f994-a6dc-4821-9f9b-f21ef4e212fe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7121f994-a6dc-4821-9f9b-f21ef4e212fe" (UID: "7121f994-a6dc-4821-9f9b-f21ef4e212fe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.284759 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7121f994-a6dc-4821-9f9b-f21ef4e212fe-kube-api-access-2m5h9" (OuterVolumeSpecName: "kube-api-access-2m5h9") pod "7121f994-a6dc-4821-9f9b-f21ef4e212fe" (UID: "7121f994-a6dc-4821-9f9b-f21ef4e212fe"). InnerVolumeSpecName "kube-api-access-2m5h9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.373190 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7121f994-a6dc-4821-9f9b-f21ef4e212fe-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.373242 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7121f994-a6dc-4821-9f9b-f21ef4e212fe-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.373262 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m5h9\" (UniqueName: \"kubernetes.io/projected/7121f994-a6dc-4821-9f9b-f21ef4e212fe-kube-api-access-2m5h9\") on node \"crc\" DevicePath \"\"" Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.785474 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" event={"ID":"7121f994-a6dc-4821-9f9b-f21ef4e212fe","Type":"ContainerDied","Data":"58bb46ddb22878d0bc722d5a41aebb952b736c997c0592951aebe313b4920249"} Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.785532 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58bb46ddb22878d0bc722d5a41aebb952b736c997c0592951aebe313b4920249" Feb 17 00:30:03 crc kubenswrapper[4805]: I0217 00:30:03.785544 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49" Feb 17 00:31:23 crc kubenswrapper[4805]: I0217 00:31:23.077161 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:31:23 crc kubenswrapper[4805]: I0217 00:31:23.077867 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:31:53 crc kubenswrapper[4805]: I0217 00:31:53.077890 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:31:53 crc kubenswrapper[4805]: I0217 00:31:53.078682 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:31:55 crc kubenswrapper[4805]: I0217 00:31:55.040525 4805 scope.go:117] "RemoveContainer" containerID="fa3b8dd7ee746544b8ddb24535751b6222d777665a0fed7db62a887380526fa6" Feb 17 00:31:55 crc kubenswrapper[4805]: I0217 00:31:55.071547 4805 scope.go:117] "RemoveContainer" containerID="73f8a906e01d4c190fc76468d8aa9cbcaf34b352f9715bbe5f7dc6c68a157ea1" Feb 17 00:31:55 crc kubenswrapper[4805]: I0217 
00:31:55.102491 4805 scope.go:117] "RemoveContainer" containerID="fa390b0d307a68d5bcaa7b8c1f963e1e2b3d668631e63931c3cafa4d379e5eae" Feb 17 00:32:23 crc kubenswrapper[4805]: I0217 00:32:23.077205 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:32:23 crc kubenswrapper[4805]: I0217 00:32:23.078011 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:32:23 crc kubenswrapper[4805]: I0217 00:32:23.078097 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:32:23 crc kubenswrapper[4805]: I0217 00:32:23.079015 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1d4cf0710e2c345e6ab83fff28c000c6465bd6ba78c6d4223f43eb52bfaa7ec"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:32:23 crc kubenswrapper[4805]: I0217 00:32:23.079141 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://a1d4cf0710e2c345e6ab83fff28c000c6465bd6ba78c6d4223f43eb52bfaa7ec" gracePeriod=600 Feb 17 00:32:23 crc kubenswrapper[4805]: I0217 00:32:23.760193 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="a1d4cf0710e2c345e6ab83fff28c000c6465bd6ba78c6d4223f43eb52bfaa7ec" exitCode=0 Feb 17 00:32:23 crc kubenswrapper[4805]: I0217 00:32:23.760253 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"a1d4cf0710e2c345e6ab83fff28c000c6465bd6ba78c6d4223f43eb52bfaa7ec"} Feb 17 00:32:23 crc kubenswrapper[4805]: I0217 00:32:23.760663 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"94681fae909df52b2f0ea3231365723006f05038e8db255093526e2aabbaa471"} Feb 17 00:32:23 crc kubenswrapper[4805]: I0217 00:32:23.760703 4805 scope.go:117] "RemoveContainer" containerID="bff5edca2c2cd9c3a1645d8c15227ed2d3c87621069f2931407d8d9904051961" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.097961 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs"] Feb 17 00:32:42 crc kubenswrapper[4805]: E0217 00:32:42.099406 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7121f994-a6dc-4821-9f9b-f21ef4e212fe" containerName="collect-profiles" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.099428 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7121f994-a6dc-4821-9f9b-f21ef4e212fe" containerName="collect-profiles" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.099618 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7121f994-a6dc-4821-9f9b-f21ef4e212fe" containerName="collect-profiles" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.100819 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.108188 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.113810 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs"] Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.132218 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.132291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.132361 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck86c\" (UniqueName: \"kubernetes.io/projected/35452466-502c-40f8-8b96-bf5ba6de3a8a-kube-api-access-ck86c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.233664 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.233779 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.233839 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck86c\" (UniqueName: \"kubernetes.io/projected/35452466-502c-40f8-8b96-bf5ba6de3a8a-kube-api-access-ck86c\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.234461 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.234934 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.274492 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck86c\" (UniqueName: \"kubernetes.io/projected/35452466-502c-40f8-8b96-bf5ba6de3a8a-kube-api-access-ck86c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.431543 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.772047 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs"] Feb 17 00:32:42 crc kubenswrapper[4805]: I0217 00:32:42.893245 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" event={"ID":"35452466-502c-40f8-8b96-bf5ba6de3a8a","Type":"ContainerStarted","Data":"6d92f6ac85a9266f8a165beaab90642497969ac4c8a7d2d72859543fb5d2b1d8"} Feb 17 00:32:43 crc kubenswrapper[4805]: I0217 00:32:43.903646 4805 generic.go:334] "Generic (PLEG): container finished" podID="35452466-502c-40f8-8b96-bf5ba6de3a8a" containerID="2c475eb0bce2299ec32898da2f1ee98d0e28c32a26007221af3f7b83d3d4b8fb" exitCode=0 Feb 17 00:32:43 crc kubenswrapper[4805]: I0217 00:32:43.903728 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" event={"ID":"35452466-502c-40f8-8b96-bf5ba6de3a8a","Type":"ContainerDied","Data":"2c475eb0bce2299ec32898da2f1ee98d0e28c32a26007221af3f7b83d3d4b8fb"} Feb 17 00:32:43 crc kubenswrapper[4805]: I0217 00:32:43.906044 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 00:32:45 crc kubenswrapper[4805]: I0217 00:32:45.922936 4805 generic.go:334] "Generic (PLEG): container finished" podID="35452466-502c-40f8-8b96-bf5ba6de3a8a" containerID="cb0a72de2a258b387d9af6debe0055ab970a3ded1bf9b62e49372e55dc14416d" exitCode=0 Feb 17 00:32:45 crc kubenswrapper[4805]: I0217 00:32:45.922990 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" event={"ID":"35452466-502c-40f8-8b96-bf5ba6de3a8a","Type":"ContainerDied","Data":"cb0a72de2a258b387d9af6debe0055ab970a3ded1bf9b62e49372e55dc14416d"} Feb 17 00:32:46 crc kubenswrapper[4805]: I0217 00:32:46.933606 4805 generic.go:334] "Generic (PLEG): container finished" podID="35452466-502c-40f8-8b96-bf5ba6de3a8a" containerID="4fb4e15febdfc3d442522b25142e816799c448c2018804329e7a02760f50846a" exitCode=0 Feb 17 00:32:46 crc kubenswrapper[4805]: I0217 00:32:46.933686 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" event={"ID":"35452466-502c-40f8-8b96-bf5ba6de3a8a","Type":"ContainerDied","Data":"4fb4e15febdfc3d442522b25142e816799c448c2018804329e7a02760f50846a"} Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.189225 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.324691 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck86c\" (UniqueName: \"kubernetes.io/projected/35452466-502c-40f8-8b96-bf5ba6de3a8a-kube-api-access-ck86c\") pod \"35452466-502c-40f8-8b96-bf5ba6de3a8a\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.324801 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-bundle\") pod \"35452466-502c-40f8-8b96-bf5ba6de3a8a\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.324824 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-util\") pod \"35452466-502c-40f8-8b96-bf5ba6de3a8a\" (UID: \"35452466-502c-40f8-8b96-bf5ba6de3a8a\") " Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.327222 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-bundle" (OuterVolumeSpecName: "bundle") pod "35452466-502c-40f8-8b96-bf5ba6de3a8a" (UID: "35452466-502c-40f8-8b96-bf5ba6de3a8a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.334580 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35452466-502c-40f8-8b96-bf5ba6de3a8a-kube-api-access-ck86c" (OuterVolumeSpecName: "kube-api-access-ck86c") pod "35452466-502c-40f8-8b96-bf5ba6de3a8a" (UID: "35452466-502c-40f8-8b96-bf5ba6de3a8a"). InnerVolumeSpecName "kube-api-access-ck86c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.426127 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.426172 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck86c\" (UniqueName: \"kubernetes.io/projected/35452466-502c-40f8-8b96-bf5ba6de3a8a-kube-api-access-ck86c\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.568657 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-util" (OuterVolumeSpecName: "util") pod "35452466-502c-40f8-8b96-bf5ba6de3a8a" (UID: "35452466-502c-40f8-8b96-bf5ba6de3a8a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.629157 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35452466-502c-40f8-8b96-bf5ba6de3a8a-util\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.950793 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" event={"ID":"35452466-502c-40f8-8b96-bf5ba6de3a8a","Type":"ContainerDied","Data":"6d92f6ac85a9266f8a165beaab90642497969ac4c8a7d2d72859543fb5d2b1d8"} Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.950927 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d92f6ac85a9266f8a165beaab90642497969ac4c8a7d2d72859543fb5d2b1d8" Feb 17 00:32:48 crc kubenswrapper[4805]: I0217 00:32:48.950850 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.180253 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tbr6r"] Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.181404 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovn-controller" containerID="cri-o://608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3" gracePeriod=30 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.181902 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="northd" containerID="cri-o://c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7" gracePeriod=30 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.182148 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="nbdb" containerID="cri-o://0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12" gracePeriod=30 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.182244 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="kube-rbac-proxy-node" containerID="cri-o://84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9" gracePeriod=30 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.182152 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="sbdb" containerID="cri-o://55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9" gracePeriod=30 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.182377 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6" gracePeriod=30 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.182419 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovn-acl-logging" containerID="cri-o://32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01" gracePeriod=30 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.217102 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" containerID="cri-o://944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa" gracePeriod=30 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.554036 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/3.log" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.559627 4805 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovn-acl-logging/0.log" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.560240 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovn-controller/0.log" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.560872 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.625698 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jz55b"] Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.625944 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="nbdb" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.625963 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="nbdb" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.625979 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovn-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.625988 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovn-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626002 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626011 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626026 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35452466-502c-40f8-8b96-bf5ba6de3a8a" containerName="util" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626034 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="35452466-502c-40f8-8b96-bf5ba6de3a8a" containerName="util" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626045 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626242 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626295 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626370 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626383 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovn-acl-logging" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626392 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovn-acl-logging" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626405 
4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626412 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626421 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="sbdb" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626431 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="sbdb" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626443 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="kube-rbac-proxy-node" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626452 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="kube-rbac-proxy-node" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626464 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="northd" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626472 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="northd" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626482 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35452466-502c-40f8-8b96-bf5ba6de3a8a" containerName="pull" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626490 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="35452466-502c-40f8-8b96-bf5ba6de3a8a" containerName="pull" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626500 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626510 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626522 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35452466-502c-40f8-8b96-bf5ba6de3a8a" containerName="extract" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626530 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="35452466-502c-40f8-8b96-bf5ba6de3a8a" containerName="extract" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626541 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626549 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.626558 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="kubecfg-setup" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626566 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="kubecfg-setup" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626679 4805 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="kube-rbac-proxy-node" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626693 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="sbdb" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626707 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626716 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="nbdb" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626726 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626736 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="35452466-502c-40f8-8b96-bf5ba6de3a8a" containerName="extract" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626745 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626754 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovn-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626763 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovn-acl-logging" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626774 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626785 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.626795 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="northd" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.627006 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerName="ovnkube-controller" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.633523 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.706855 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-config\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.706907 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-kubelet\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.706931 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-systemd-units\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.706954 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-ovn-kubernetes\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707052 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707059 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707102 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707172 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfgww\" (UniqueName: \"kubernetes.io/projected/8d9024ef-7937-42b2-8fbc-60db984b9a2f-kube-api-access-bfgww\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707354 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-node-log\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707420 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-etc-openvswitch\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707436 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-node-log" (OuterVolumeSpecName: "node-log") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707442 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-bin\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707467 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707493 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-env-overrides\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707581 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-var-lib-openvswitch\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707643 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707604 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707696 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-ovn\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707720 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707738 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707933 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.707784 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-script-lib\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708109 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-netns\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708149 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708195 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708196 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708151 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-openvswitch\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708264 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-slash\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708289 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708352 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovn-node-metrics-cert\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708365 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-slash" (OuterVolumeSpecName: "host-slash") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708394 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-netd\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708404 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708429 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708454 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-systemd\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708477 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-log-socket\") pod \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\" (UID: \"8d9024ef-7937-42b2-8fbc-60db984b9a2f\") " Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708622 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-etc-openvswitch\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708670 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03284984-4dcc-47a5-a417-f9f5682d7f0d-ovn-node-metrics-cert\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708673 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-log-socket" (OuterVolumeSpecName: "log-socket") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708732 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-slash\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708756 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-cni-netd\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708799 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-run-systemd\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708904 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-cni-bin\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708952 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-kubelet\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.708979 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709011 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-run-netns\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709040 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03284984-4dcc-47a5-a417-f9f5682d7f0d-env-overrides\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709181 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-run-ovn\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709227 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709293 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brfbc\" (UniqueName: \"kubernetes.io/projected/03284984-4dcc-47a5-a417-f9f5682d7f0d-kube-api-access-brfbc\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709370 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-var-lib-openvswitch\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709440 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03284984-4dcc-47a5-a417-f9f5682d7f0d-ovnkube-config\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709659 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-node-log\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709749 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-systemd-units\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709778 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-run-openvswitch\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709844 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03284984-4dcc-47a5-a417-f9f5682d7f0d-ovnkube-script-lib\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.709907 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-log-socket\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710021 4805 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710038 4805 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710050 4805 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710059 4805 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710069 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710078 4805 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710086 4805 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710095 4805 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710104 4805 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-node-log\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710113 4805 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710121 4805 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710128 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-env-overrides\") on node 
\"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710137 4805 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710144 4805 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710152 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710161 4805 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.710169 4805 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.713943 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9024ef-7937-42b2-8fbc-60db984b9a2f-kube-api-access-bfgww" (OuterVolumeSpecName: "kube-api-access-bfgww") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "kube-api-access-bfgww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.714631 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.729288 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8d9024ef-7937-42b2-8fbc-60db984b9a2f" (UID: "8d9024ef-7937-42b2-8fbc-60db984b9a2f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.811799 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.811851 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brfbc\" (UniqueName: \"kubernetes.io/projected/03284984-4dcc-47a5-a417-f9f5682d7f0d-kube-api-access-brfbc\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.811882 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-var-lib-openvswitch\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.811927 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03284984-4dcc-47a5-a417-f9f5682d7f0d-ovnkube-config\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.811936 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.812033 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-node-log\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.812054 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-systemd-units\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.812046 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-var-lib-openvswitch\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.812119 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-node-log\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc 
kubenswrapper[4805]: I0217 00:32:53.812074 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-run-openvswitch\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.812169 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03284984-4dcc-47a5-a417-f9f5682d7f0d-ovnkube-script-lib\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.812226 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-run-openvswitch\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.812341 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-systemd-units\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813073 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03284984-4dcc-47a5-a417-f9f5682d7f0d-ovnkube-script-lib\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-log-socket\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813265 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-log-socket\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813300 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03284984-4dcc-47a5-a417-f9f5682d7f0d-ovnkube-config\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813489 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03284984-4dcc-47a5-a417-f9f5682d7f0d-ovn-node-metrics-cert\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813522 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-etc-openvswitch\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813548 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-slash\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813578 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-cni-netd\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813606 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-run-systemd\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813680 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-etc-openvswitch\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813753 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-cni-netd\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813771 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-cni-bin\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813780 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-slash\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813816 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-cni-bin\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813836 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-run-systemd\") pod \"ovnkube-node-jz55b\" (UID: 
\"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813851 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-kubelet\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813885 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-kubelet\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813897 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813935 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-run-netns\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813954 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.813965 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03284984-4dcc-47a5-a417-f9f5682d7f0d-env-overrides\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.814009 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-host-run-netns\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.814006 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-run-ovn\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.814042 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03284984-4dcc-47a5-a417-f9f5682d7f0d-run-ovn\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.814402 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03284984-4dcc-47a5-a417-f9f5682d7f0d-env-overrides\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.814496 4805 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d9024ef-7937-42b2-8fbc-60db984b9a2f-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.814530 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfgww\" (UniqueName: \"kubernetes.io/projected/8d9024ef-7937-42b2-8fbc-60db984b9a2f-kube-api-access-bfgww\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.814547 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d9024ef-7937-42b2-8fbc-60db984b9a2f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.818217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03284984-4dcc-47a5-a417-f9f5682d7f0d-ovn-node-metrics-cert\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.836039 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brfbc\" (UniqueName: \"kubernetes.io/projected/03284984-4dcc-47a5-a417-f9f5682d7f0d-kube-api-access-brfbc\") pod \"ovnkube-node-jz55b\" (UID: \"03284984-4dcc-47a5-a417-f9f5682d7f0d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.945745 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.982055 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lk6fw_5da6b304-e28f-4666-817f-06bcc107e3fe/kube-multus/2.log" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.983123 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lk6fw_5da6b304-e28f-4666-817f-06bcc107e3fe/kube-multus/1.log" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.983212 4805 generic.go:334] "Generic (PLEG): container finished" podID="5da6b304-e28f-4666-817f-06bcc107e3fe" containerID="123d9a27d0d9e8003b08e74a0e80d8cc248675429f1601cb9849bdeec682f406" exitCode=2 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.983303 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lk6fw" event={"ID":"5da6b304-e28f-4666-817f-06bcc107e3fe","Type":"ContainerDied","Data":"123d9a27d0d9e8003b08e74a0e80d8cc248675429f1601cb9849bdeec682f406"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.983376 4805 scope.go:117] "RemoveContainer" containerID="dcc16f54424be419535a037bae9b8bd277ef12dc81f826bb9b63728f4e35ff4f" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.984316 4805 scope.go:117] "RemoveContainer" containerID="123d9a27d0d9e8003b08e74a0e80d8cc248675429f1601cb9849bdeec682f406" Feb 17 00:32:53 crc kubenswrapper[4805]: E0217 00:32:53.984727 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lk6fw_openshift-multus(5da6b304-e28f-4666-817f-06bcc107e3fe)\"" pod="openshift-multus/multus-lk6fw" podUID="5da6b304-e28f-4666-817f-06bcc107e3fe" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.985004 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" event={"ID":"03284984-4dcc-47a5-a417-f9f5682d7f0d","Type":"ContainerStarted","Data":"b44d0f2b4bb01b276e8c00cbedc412fe7f96a3c3691ecb7fc98d85e35e478151"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.988729 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovnkube-controller/3.log" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.991018 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovn-acl-logging/0.log" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.991714 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tbr6r_8d9024ef-7937-42b2-8fbc-60db984b9a2f/ovn-controller/0.log" Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992194 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa" exitCode=0 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992224 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9" exitCode=0 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992235 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" 
containerID="0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12" exitCode=0 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992244 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7" exitCode=0 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992252 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6" exitCode=0 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992265 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9" exitCode=0 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992274 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01" exitCode=143 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992282 4805 generic.go:334] "Generic (PLEG): container finished" podID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" containerID="608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3" exitCode=143 Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992304 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992348 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992364 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992376 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992388 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992400 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992414 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa"} Feb 17 00:32:53 crc 
kubenswrapper[4805]: I0217 00:32:53.992428 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992435 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992443 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992451 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992461 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992470 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992478 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992486 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992495 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992507 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992525 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992536 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992546 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992555 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12"} Feb 17 00:32:53 crc 
kubenswrapper[4805]: I0217 00:32:53.992563 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992573 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992582 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992589 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992596 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992603 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992613 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992624 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992634 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992641 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992648 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992655 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992663 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992670 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9"} Feb 17 00:32:53 crc 
kubenswrapper[4805]: I0217 00:32:53.992677 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992683 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992690 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992699 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" event={"ID":"8d9024ef-7937-42b2-8fbc-60db984b9a2f","Type":"ContainerDied","Data":"c09c210ac5d0e53e9f60e90bbffe5ae8b13f9b2dd1a44fe3519e6a52c3902fda"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992710 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992719 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992729 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992737 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992747 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992756 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992763 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992770 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992777 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3"} Feb 17 00:32:53 crc kubenswrapper[4805]: I0217 00:32:53.992784 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd"} Feb 17 00:32:53 crc 
kubenswrapper[4805]: I0217 00:32:53.992446 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tbr6r" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.018271 4805 scope.go:117] "RemoveContainer" containerID="944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.040768 4805 scope.go:117] "RemoveContainer" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.059973 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tbr6r"] Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.066769 4805 scope.go:117] "RemoveContainer" containerID="55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.067357 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tbr6r"] Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.082834 4805 scope.go:117] "RemoveContainer" containerID="0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.102039 4805 scope.go:117] "RemoveContainer" containerID="c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.122467 4805 scope.go:117] "RemoveContainer" containerID="639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.167284 4805 scope.go:117] "RemoveContainer" containerID="84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.180418 4805 scope.go:117] "RemoveContainer" containerID="32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.192021 4805 scope.go:117] "RemoveContainer" containerID="608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.213985 4805 scope.go:117] "RemoveContainer" containerID="ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.228481 4805 scope.go:117] "RemoveContainer" containerID="944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa" Feb 17 00:32:54 crc kubenswrapper[4805]: E0217 00:32:54.228936 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa\": container with ID starting with 944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa not found: ID does not exist" containerID="944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.228976 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa"} err="failed to get container status \"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa\": rpc error: code = NotFound desc = could not find container \"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa\": container with ID starting with 944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa not found: ID does not exist" Feb 17 
00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.229003 4805 scope.go:117] "RemoveContainer" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" Feb 17 00:32:54 crc kubenswrapper[4805]: E0217 00:32:54.229299 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\": container with ID starting with 7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5 not found: ID does not exist" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.229344 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5"} err="failed to get container status \"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\": rpc error: code = NotFound desc = could not find container \"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\": container with ID starting with 7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.229372 4805 scope.go:117] "RemoveContainer" containerID="55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9" Feb 17 00:32:54 crc kubenswrapper[4805]: E0217 00:32:54.229575 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\": container with ID starting with 55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9 not found: ID does not exist" containerID="55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.229597 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9"} err="failed to get container status \"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\": rpc error: code = NotFound desc = could not find container \"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\": container with ID starting with 55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.229615 4805 scope.go:117] "RemoveContainer" containerID="0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12" Feb 17 00:32:54 crc kubenswrapper[4805]: E0217 00:32:54.229813 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\": container with ID starting with 0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12 not found: ID does not exist" containerID="0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.229834 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12"} err="failed to get container status \"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\": rpc error: code = NotFound desc = could not find container 
\"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\": container with ID starting with 0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.229848 4805 scope.go:117] "RemoveContainer" containerID="c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7" Feb 17 00:32:54 crc kubenswrapper[4805]: E0217 00:32:54.230181 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\": container with ID starting with c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7 not found: ID does not exist" containerID="c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.230204 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7"} err="failed to get container status \"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\": rpc error: code = NotFound desc = could not find container \"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\": container with ID starting with c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.230219 4805 scope.go:117] "RemoveContainer" containerID="639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6" Feb 17 00:32:54 crc kubenswrapper[4805]: E0217 00:32:54.230514 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\": container with ID starting with 639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6 not found: ID does not exist" containerID="639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.230555 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6"} err="failed to get container status \"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\": rpc error: code = NotFound desc = could not find container \"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\": container with ID starting with 639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.230585 4805 scope.go:117] "RemoveContainer" containerID="84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9" Feb 17 00:32:54 crc kubenswrapper[4805]: E0217 00:32:54.230827 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\": container with ID starting with 84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9 not found: ID does not exist" containerID="84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.230853 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9"} 
err="failed to get container status \"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\": rpc error: code = NotFound desc = could not find container \"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\": container with ID starting with 84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.230870 4805 scope.go:117] "RemoveContainer" containerID="32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01" Feb 17 00:32:54 crc kubenswrapper[4805]: E0217 00:32:54.231091 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\": container with ID starting with 32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01 not found: ID does not exist" containerID="32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.231114 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01"} err="failed to get container status \"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\": rpc error: code = NotFound desc = could not find container \"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\": container with ID starting with 32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.231128 4805 scope.go:117] "RemoveContainer" containerID="608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3" Feb 17 00:32:54 crc kubenswrapper[4805]: E0217 00:32:54.231307 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\": container with ID starting with 608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3 not found: ID does not exist" containerID="608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.231399 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3"} err="failed to get container status \"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\": rpc error: code = NotFound desc = could not find container \"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\": container with ID starting with 608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.231417 4805 scope.go:117] "RemoveContainer" containerID="ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd" Feb 17 00:32:54 crc kubenswrapper[4805]: E0217 00:32:54.239397 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\": container with ID starting with ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd not found: ID does not exist" containerID="ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.239443 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd"} err="failed to get container status \"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\": rpc error: code = NotFound desc = could not find container \"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\": container with ID starting with ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.239471 4805 scope.go:117] "RemoveContainer" containerID="944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.239941 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa"} err="failed to get container status \"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa\": rpc error: code = NotFound desc = could not find container \"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa\": container with ID starting with 944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.239987 4805 scope.go:117] "RemoveContainer" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.240273 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5"} err="failed to get container status \"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\": rpc error: code = NotFound desc = could not find container \"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\": container with ID starting with 7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.240292 4805 scope.go:117] "RemoveContainer" containerID="55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.240513 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9"} err="failed to get container status \"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\": rpc error: code = NotFound desc = could not find container \"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\": container with ID starting with 55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.240533 4805 scope.go:117] "RemoveContainer" containerID="0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.240736 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12"} err="failed to get container status \"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\": rpc error: code = NotFound desc = could not find container \"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\": container with ID starting with 
0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.240756 4805 scope.go:117] "RemoveContainer" containerID="c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.241013 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7"} err="failed to get container status \"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\": rpc error: code = NotFound desc = could not find container \"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\": container with ID starting with c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.241042 4805 scope.go:117] "RemoveContainer" containerID="639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.241296 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6"} err="failed to get container status \"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\": rpc error: code = NotFound desc = could not find container \"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\": container with ID starting with 639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.241317 4805 scope.go:117] "RemoveContainer" containerID="84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.241768 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9"} err="failed to get container status \"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\": rpc error: code = NotFound desc = could not find container \"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\": container with ID starting with 84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.241790 4805 scope.go:117] "RemoveContainer" containerID="32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.243854 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01"} err="failed to get container status \"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\": rpc error: code = NotFound desc = could not find container \"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\": container with ID starting with 32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.243878 4805 scope.go:117] "RemoveContainer" containerID="608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.244079 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3"} err="failed to get container status \"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\": rpc error: code = NotFound desc = could not find container \"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\": container with ID starting with 608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.244098 4805 scope.go:117] "RemoveContainer" containerID="ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.244341 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd"} err="failed to get container status \"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\": rpc error: code = NotFound desc = could not find container \"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\": container with ID starting with ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.244363 4805 scope.go:117] "RemoveContainer" containerID="944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.244622 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa"} err="failed to get container status \"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa\": rpc error: code = NotFound desc = could not find container \"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa\": container with ID starting with 944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.244668 4805 scope.go:117] "RemoveContainer" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.244895 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5"} err="failed to get container status \"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\": rpc error: code = NotFound desc = could not find container \"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\": container with ID starting with 7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.244915 4805 scope.go:117] "RemoveContainer" containerID="55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.245118 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9"} err="failed to get container status \"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\": rpc error: code = NotFound desc = could not find container \"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\": container with ID starting with 55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9 not found: ID does not exist" Feb 
17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.245140 4805 scope.go:117] "RemoveContainer" containerID="0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.245517 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12"} err="failed to get container status \"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\": rpc error: code = NotFound desc = could not find container \"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\": container with ID starting with 0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.245551 4805 scope.go:117] "RemoveContainer" containerID="c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.245819 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7"} err="failed to get container status \"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\": rpc error: code = NotFound desc = could not find container \"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\": container with ID starting with c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.245860 4805 scope.go:117] "RemoveContainer" containerID="639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.247038 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6"} err="failed to get container status \"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\": rpc error: code = NotFound desc = could not find container \"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\": container with ID starting with 639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.247062 4805 scope.go:117] "RemoveContainer" containerID="84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.247364 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9"} err="failed to get container status \"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\": rpc error: code = NotFound desc = could not find container \"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\": container with ID starting with 84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.247391 4805 scope.go:117] "RemoveContainer" containerID="32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.247636 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01"} err="failed to get container status 
\"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\": rpc error: code = NotFound desc = could not find container \"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\": container with ID starting with 32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.247657 4805 scope.go:117] "RemoveContainer" containerID="608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.247887 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3"} err="failed to get container status \"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\": rpc error: code = NotFound desc = could not find container \"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\": container with ID starting with 608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.247906 4805 scope.go:117] "RemoveContainer" containerID="ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.248169 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd"} err="failed to get container status \"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\": rpc error: code = NotFound desc = could not find container \"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\": container with ID starting with ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.248198 4805 scope.go:117] "RemoveContainer" containerID="944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.248472 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa"} err="failed to get container status \"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa\": rpc error: code = NotFound desc = could not find container \"944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa\": container with ID starting with 944a73f5dbd27582f9f171cfefc734ed568f4f78a6390f9bcf727190f88a08fa not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.248496 4805 scope.go:117] "RemoveContainer" containerID="7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.248731 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5"} err="failed to get container status \"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\": rpc error: code = NotFound desc = could not find container \"7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5\": container with ID starting with 7d1f0b2195b9815906abcf08e462e8a61cadba04207d4fef3f669842164e8af5 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.248750 4805 scope.go:117] "RemoveContainer" 
containerID="55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.248937 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9"} err="failed to get container status \"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\": rpc error: code = NotFound desc = could not find container \"55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9\": container with ID starting with 55fe64c6310d6a518224606fe8d146648f3c1e4c22ff7df38f9b5f083ec3e6c9 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.248956 4805 scope.go:117] "RemoveContainer" containerID="0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.249144 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12"} err="failed to get container status \"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\": rpc error: code = NotFound desc = could not find container \"0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12\": container with ID starting with 0b748558bd0aff828b0f24f7f355744083b4f42d4eee68f97ef18fc9be3a6f12 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.249163 4805 scope.go:117] "RemoveContainer" containerID="c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.249457 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7"} err="failed to get container status \"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\": rpc error: code = NotFound desc = could not find container \"c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7\": container with ID starting with c73b2ecc6f33e098e0eee7f2970fb62997e523fb0c9a6998aa671d0b6fb72cc7 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.249479 4805 scope.go:117] "RemoveContainer" containerID="639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.249696 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6"} err="failed to get container status \"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\": rpc error: code = NotFound desc = could not find container \"639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6\": container with ID starting with 639bdc53a73fb8088d2cd377f3c61c194fbccbc2b0fcf3df392601ec6a9f26b6 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.249720 4805 scope.go:117] "RemoveContainer" containerID="84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.250042 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9"} err="failed to get container status \"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\": rpc error: code = NotFound desc = could not find 
container \"84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9\": container with ID starting with 84c92cd0490603ce0d6e8ba1440bfbee471c0401995dbf56a448fcdf0d2a87c9 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.250065 4805 scope.go:117] "RemoveContainer" containerID="32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.250281 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01"} err="failed to get container status \"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\": rpc error: code = NotFound desc = could not find container \"32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01\": container with ID starting with 32c8b977609a982909d505b97a5cb0b9f8608cc9ed8241369dfe780e28574b01 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.250309 4805 scope.go:117] "RemoveContainer" containerID="608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.250892 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3"} err="failed to get container status \"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\": rpc error: code = NotFound desc = could not find container \"608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3\": container with ID starting with 608d40d4135b30bff70a46434c299894b2b798a950535435a28dd1765a39a0c3 not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.250917 4805 scope.go:117] "RemoveContainer" containerID="ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.251120 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd"} err="failed to get container status \"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\": rpc error: code = NotFound desc = could not find container \"ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd\": container with ID starting with ef67c44d1a7e5791d31202174cd70d7e2f0969ce5beaf7d7ad96e3224290bdbd not found: ID does not exist" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.803625 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d9024ef-7937-42b2-8fbc-60db984b9a2f" path="/var/lib/kubelet/pods/8d9024ef-7937-42b2-8fbc-60db984b9a2f/volumes" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.998380 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lk6fw_5da6b304-e28f-4666-817f-06bcc107e3fe/kube-multus/2.log" Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.999484 4805 generic.go:334] "Generic (PLEG): container finished" podID="03284984-4dcc-47a5-a417-f9f5682d7f0d" containerID="64d21426b7fcc7c962425da5d971d453f77a1c67dd7ac9b3fc49f815e93a4912" exitCode=0 Feb 17 00:32:54 crc kubenswrapper[4805]: I0217 00:32:54.999526 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" 
event={"ID":"03284984-4dcc-47a5-a417-f9f5682d7f0d","Type":"ContainerDied","Data":"64d21426b7fcc7c962425da5d971d453f77a1c67dd7ac9b3fc49f815e93a4912"} Feb 17 00:32:55 crc kubenswrapper[4805]: I0217 00:32:55.153310 4805 scope.go:117] "RemoveContainer" containerID="d8b7d77a933637ad8440cb18e43b6c9e0bda02216ee1b0888e8c3c9b0b819508" Feb 17 00:32:56 crc kubenswrapper[4805]: I0217 00:32:56.008621 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" event={"ID":"03284984-4dcc-47a5-a417-f9f5682d7f0d","Type":"ContainerStarted","Data":"87cb2f34030587e2173b84d1be9a4aa928b3f79dcbcad2ecd1fd49ecad785a72"} Feb 17 00:32:56 crc kubenswrapper[4805]: I0217 00:32:56.008944 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" event={"ID":"03284984-4dcc-47a5-a417-f9f5682d7f0d","Type":"ContainerStarted","Data":"a3515e497c41b338eb635a5c7f84943c69e5e4b571206d85437589288ef583ba"} Feb 17 00:32:56 crc kubenswrapper[4805]: I0217 00:32:56.008956 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" event={"ID":"03284984-4dcc-47a5-a417-f9f5682d7f0d","Type":"ContainerStarted","Data":"257baa9b274d9930a35ec4bd37aef89db7e0ca2448de9f5cba1c401515982261"} Feb 17 00:32:56 crc kubenswrapper[4805]: I0217 00:32:56.008967 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" event={"ID":"03284984-4dcc-47a5-a417-f9f5682d7f0d","Type":"ContainerStarted","Data":"8795866b91df52724ca63793e4f0cfedececded88939c2a231a6a03b918da326"} Feb 17 00:32:56 crc kubenswrapper[4805]: I0217 00:32:56.008976 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" event={"ID":"03284984-4dcc-47a5-a417-f9f5682d7f0d","Type":"ContainerStarted","Data":"45b6245918377399ed5e4fbb3a07441ab4f5bf3691d0727d2ff8b65b9222a716"} Feb 17 00:32:56 crc kubenswrapper[4805]: I0217 00:32:56.008985 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" event={"ID":"03284984-4dcc-47a5-a417-f9f5682d7f0d","Type":"ContainerStarted","Data":"714ad1948804b83a05ad4d311bea1a6fdaaad70f9c5e18020e5c874a79daec04"} Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.026793 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" event={"ID":"03284984-4dcc-47a5-a417-f9f5682d7f0d","Type":"ContainerStarted","Data":"017af6cf722eb91dd88ad73169f57b8342a044e6abff3e590b765ac541c56a86"} Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.075911 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6"] Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.076842 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.081198 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.081262 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.085991 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-dbjxh" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.186092 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84"] Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.186813 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.190551 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.190670 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-w6zcl" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.218806 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg"] Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.221398 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.276719 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xktnz\" (UniqueName: \"kubernetes.io/projected/841806ee-4049-4561-b025-3af0469f8fb2-kube-api-access-xktnz\") pod \"obo-prometheus-operator-68bc856cb9-xw7l6\" (UID: \"841806ee-4049-4561-b025-3af0469f8fb2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.378131 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9f5bbbc-6740-427e-90d5-69011b2966cd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84\" (UID: \"c9f5bbbc-6740-427e-90d5-69011b2966cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.378179 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xktnz\" (UniqueName: \"kubernetes.io/projected/841806ee-4049-4561-b025-3af0469f8fb2-kube-api-access-xktnz\") pod \"obo-prometheus-operator-68bc856cb9-xw7l6\" (UID: \"841806ee-4049-4561-b025-3af0469f8fb2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.378261 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93be50de-fcd3-41d1-8641-1b7c73cb26ea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg\" (UID: \"93be50de-fcd3-41d1-8641-1b7c73cb26ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.378301 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93be50de-fcd3-41d1-8641-1b7c73cb26ea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg\" (UID: \"93be50de-fcd3-41d1-8641-1b7c73cb26ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.378395 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9f5bbbc-6740-427e-90d5-69011b2966cd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84\" (UID: \"c9f5bbbc-6740-427e-90d5-69011b2966cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.385411 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-rztzq"] Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.386224 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.390017 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-2gxhv" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.390263 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.396191 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xktnz\" (UniqueName: \"kubernetes.io/projected/841806ee-4049-4561-b025-3af0469f8fb2-kube-api-access-xktnz\") pod \"obo-prometheus-operator-68bc856cb9-xw7l6\" (UID: \"841806ee-4049-4561-b025-3af0469f8fb2\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.479263 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93be50de-fcd3-41d1-8641-1b7c73cb26ea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg\" (UID: \"93be50de-fcd3-41d1-8641-1b7c73cb26ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.479310 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93be50de-fcd3-41d1-8641-1b7c73cb26ea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg\" (UID: \"93be50de-fcd3-41d1-8641-1b7c73cb26ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.479387 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4wls\" (UniqueName: \"kubernetes.io/projected/ec346c4e-f52f-4ee4-9697-e4b95405fe5d-kube-api-access-x4wls\") pod \"observability-operator-59bdc8b94-rztzq\" (UID: \"ec346c4e-f52f-4ee4-9697-e4b95405fe5d\") " pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.479426 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9f5bbbc-6740-427e-90d5-69011b2966cd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84\" (UID: \"c9f5bbbc-6740-427e-90d5-69011b2966cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.479484 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9f5bbbc-6740-427e-90d5-69011b2966cd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84\" (UID: \"c9f5bbbc-6740-427e-90d5-69011b2966cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.479510 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec346c4e-f52f-4ee4-9697-e4b95405fe5d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-rztzq\" (UID: 
\"ec346c4e-f52f-4ee4-9697-e4b95405fe5d\") " pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.483029 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9f5bbbc-6740-427e-90d5-69011b2966cd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84\" (UID: \"c9f5bbbc-6740-427e-90d5-69011b2966cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.483746 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93be50de-fcd3-41d1-8641-1b7c73cb26ea-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg\" (UID: \"93be50de-fcd3-41d1-8641-1b7c73cb26ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.495725 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93be50de-fcd3-41d1-8641-1b7c73cb26ea-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg\" (UID: \"93be50de-fcd3-41d1-8641-1b7c73cb26ea\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.496993 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9f5bbbc-6740-427e-90d5-69011b2966cd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84\" (UID: \"c9f5bbbc-6740-427e-90d5-69011b2966cd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.499441 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.501109 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-btvcr"] Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.501868 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.504622 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-9rgh7" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.525649 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(fe1a72bf6f8a56d0d9185ec43c9c162753de237bf86427e45f6055984d33af2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.525718 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(fe1a72bf6f8a56d0d9185ec43c9c162753de237bf86427e45f6055984d33af2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.525738 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(fe1a72bf6f8a56d0d9185ec43c9c162753de237bf86427e45f6055984d33af2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.525786 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators(c9f5bbbc-6740-427e-90d5-69011b2966cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators(c9f5bbbc-6740-427e-90d5-69011b2966cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(fe1a72bf6f8a56d0d9185ec43c9c162753de237bf86427e45f6055984d33af2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" podUID="c9f5bbbc-6740-427e-90d5-69011b2966cd" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.543532 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.558104 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(f2400dc9ac5b4152000505fa911f39a9364eedffb35f80e46db33a65975eb3ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.558161 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(f2400dc9ac5b4152000505fa911f39a9364eedffb35f80e46db33a65975eb3ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.558180 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(f2400dc9ac5b4152000505fa911f39a9364eedffb35f80e46db33a65975eb3ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.558225 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators(93be50de-fcd3-41d1-8641-1b7c73cb26ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators(93be50de-fcd3-41d1-8641-1b7c73cb26ea)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(f2400dc9ac5b4152000505fa911f39a9364eedffb35f80e46db33a65975eb3ad): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" podUID="93be50de-fcd3-41d1-8641-1b7c73cb26ea" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.580545 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec346c4e-f52f-4ee4-9697-e4b95405fe5d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-rztzq\" (UID: \"ec346c4e-f52f-4ee4-9697-e4b95405fe5d\") " pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.580620 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4wls\" (UniqueName: \"kubernetes.io/projected/ec346c4e-f52f-4ee4-9697-e4b95405fe5d-kube-api-access-x4wls\") pod \"observability-operator-59bdc8b94-rztzq\" (UID: \"ec346c4e-f52f-4ee4-9697-e4b95405fe5d\") " pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.584133 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec346c4e-f52f-4ee4-9697-e4b95405fe5d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-rztzq\" (UID: \"ec346c4e-f52f-4ee4-9697-e4b95405fe5d\") " pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.598929 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4wls\" (UniqueName: \"kubernetes.io/projected/ec346c4e-f52f-4ee4-9697-e4b95405fe5d-kube-api-access-x4wls\") pod \"observability-operator-59bdc8b94-rztzq\" (UID: \"ec346c4e-f52f-4ee4-9697-e4b95405fe5d\") " pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.682184 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6b7fab38-3b46-42bc-a296-945f451f04f6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-btvcr\" (UID: \"6b7fab38-3b46-42bc-a296-945f451f04f6\") " pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.682233 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmcg\" (UniqueName: \"kubernetes.io/projected/6b7fab38-3b46-42bc-a296-945f451f04f6-kube-api-access-vcmcg\") pod \"perses-operator-5bf474d74f-btvcr\" (UID: \"6b7fab38-3b46-42bc-a296-945f451f04f6\") " pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.689225 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.707845 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(21c6b581b5f06795d369a33745ced18522a63eea2bb612c4344305980b272a04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.707913 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(21c6b581b5f06795d369a33745ced18522a63eea2bb612c4344305980b272a04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.707940 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(21c6b581b5f06795d369a33745ced18522a63eea2bb612c4344305980b272a04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.707993 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators(841806ee-4049-4561-b025-3af0469f8fb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators(841806ee-4049-4561-b025-3af0469f8fb2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(21c6b581b5f06795d369a33745ced18522a63eea2bb612c4344305980b272a04): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" podUID="841806ee-4049-4561-b025-3af0469f8fb2" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.716394 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.731154 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(6fb007d2ed3e3781983333143c9d3dd5e5daf71b0eacf3450983bc548dd1ce03): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.731215 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(6fb007d2ed3e3781983333143c9d3dd5e5daf71b0eacf3450983bc548dd1ce03): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.731239 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(6fb007d2ed3e3781983333143c9d3dd5e5daf71b0eacf3450983bc548dd1ce03): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.731283 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-rztzq_openshift-operators(ec346c4e-f52f-4ee4-9697-e4b95405fe5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-rztzq_openshift-operators(ec346c4e-f52f-4ee4-9697-e4b95405fe5d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(6fb007d2ed3e3781983333143c9d3dd5e5daf71b0eacf3450983bc548dd1ce03): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" podUID="ec346c4e-f52f-4ee4-9697-e4b95405fe5d" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.783239 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b7fab38-3b46-42bc-a296-945f451f04f6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-btvcr\" (UID: \"6b7fab38-3b46-42bc-a296-945f451f04f6\") " pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.783298 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmcg\" (UniqueName: \"kubernetes.io/projected/6b7fab38-3b46-42bc-a296-945f451f04f6-kube-api-access-vcmcg\") pod \"perses-operator-5bf474d74f-btvcr\" (UID: \"6b7fab38-3b46-42bc-a296-945f451f04f6\") " pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.784148 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b7fab38-3b46-42bc-a296-945f451f04f6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-btvcr\" (UID: \"6b7fab38-3b46-42bc-a296-945f451f04f6\") " pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.803719 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmcg\" (UniqueName: \"kubernetes.io/projected/6b7fab38-3b46-42bc-a296-945f451f04f6-kube-api-access-vcmcg\") pod \"perses-operator-5bf474d74f-btvcr\" (UID: \"6b7fab38-3b46-42bc-a296-945f451f04f6\") " pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:32:59 crc kubenswrapper[4805]: I0217 00:32:59.852392 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.873116 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(e86be2230c75ca1de346143322348b56129c9da275e7aec4269d6fd093f2ba47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.873189 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(e86be2230c75ca1de346143322348b56129c9da275e7aec4269d6fd093f2ba47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.873217 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(e86be2230c75ca1de346143322348b56129c9da275e7aec4269d6fd093f2ba47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:32:59 crc kubenswrapper[4805]: E0217 00:32:59.873276 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-btvcr_openshift-operators(6b7fab38-3b46-42bc-a296-945f451f04f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-btvcr_openshift-operators(6b7fab38-3b46-42bc-a296-945f451f04f6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(e86be2230c75ca1de346143322348b56129c9da275e7aec4269d6fd093f2ba47): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" podUID="6b7fab38-3b46-42bc-a296-945f451f04f6" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.038996 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" event={"ID":"03284984-4dcc-47a5-a417-f9f5682d7f0d","Type":"ContainerStarted","Data":"f18b88fc18f026ca79c504c06307a20b6b9655633e102b33ac2fba2869d7b7e8"} Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.039400 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.039437 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.065654 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.071940 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" podStartSLOduration=8.071927656 podStartE2EDuration="8.071927656s" podCreationTimestamp="2026-02-17 00:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:33:01.069480206 +0000 UTC m=+607.085289604" watchObservedRunningTime="2026-02-17 00:33:01.071927656 +0000 UTC m=+607.087737054" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.342008 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-btvcr"] Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.342182 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.342696 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.352187 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84"] Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.352287 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.352756 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.358793 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6"] Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.358877 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.359245 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.384902 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg"] Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.384994 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.385385 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.389275 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-rztzq"] Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.389412 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:01 crc kubenswrapper[4805]: I0217 00:33:01.389840 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.397645 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(f393ba39e517fb3071d6562b0bd4c4eb28d71ed12ddd9752ec32b9a41be81f13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.397708 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(f393ba39e517fb3071d6562b0bd4c4eb28d71ed12ddd9752ec32b9a41be81f13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.397732 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(f393ba39e517fb3071d6562b0bd4c4eb28d71ed12ddd9752ec32b9a41be81f13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.397779 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-btvcr_openshift-operators(6b7fab38-3b46-42bc-a296-945f451f04f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-btvcr_openshift-operators(6b7fab38-3b46-42bc-a296-945f451f04f6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(f393ba39e517fb3071d6562b0bd4c4eb28d71ed12ddd9752ec32b9a41be81f13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" podUID="6b7fab38-3b46-42bc-a296-945f451f04f6" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.429035 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(bfc9cd3c8bfe2b9b3022063b2ccc813ae558c3f761a084108c13d7ce09ab5984): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.429090 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(bfc9cd3c8bfe2b9b3022063b2ccc813ae558c3f761a084108c13d7ce09ab5984): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.429113 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(bfc9cd3c8bfe2b9b3022063b2ccc813ae558c3f761a084108c13d7ce09ab5984): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.429159 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators(c9f5bbbc-6740-427e-90d5-69011b2966cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators(c9f5bbbc-6740-427e-90d5-69011b2966cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(bfc9cd3c8bfe2b9b3022063b2ccc813ae558c3f761a084108c13d7ce09ab5984): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" podUID="c9f5bbbc-6740-427e-90d5-69011b2966cd" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.438742 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(bc6d668b505f15b25dda81615faa2433c61a42acd524446f8179998e1765bb13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.438805 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(bc6d668b505f15b25dda81615faa2433c61a42acd524446f8179998e1765bb13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.438825 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(bc6d668b505f15b25dda81615faa2433c61a42acd524446f8179998e1765bb13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.438870 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators(841806ee-4049-4561-b025-3af0469f8fb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators(841806ee-4049-4561-b025-3af0469f8fb2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(bc6d668b505f15b25dda81615faa2433c61a42acd524446f8179998e1765bb13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" podUID="841806ee-4049-4561-b025-3af0469f8fb2" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.445233 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(7594ac07f694209e84ec3bd3bd98f2f384a010e6c6a10bba61590e206ab03b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.445270 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(7594ac07f694209e84ec3bd3bd98f2f384a010e6c6a10bba61590e206ab03b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.445287 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(7594ac07f694209e84ec3bd3bd98f2f384a010e6c6a10bba61590e206ab03b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.445343 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators(93be50de-fcd3-41d1-8641-1b7c73cb26ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators(93be50de-fcd3-41d1-8641-1b7c73cb26ea)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(7594ac07f694209e84ec3bd3bd98f2f384a010e6c6a10bba61590e206ab03b7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" podUID="93be50de-fcd3-41d1-8641-1b7c73cb26ea" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.459559 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(073856a531780db73120c55ec1a379ce57db8b0484db59fc8b2a12b003341e13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.459605 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(073856a531780db73120c55ec1a379ce57db8b0484db59fc8b2a12b003341e13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.459622 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(073856a531780db73120c55ec1a379ce57db8b0484db59fc8b2a12b003341e13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:01 crc kubenswrapper[4805]: E0217 00:33:01.459662 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-rztzq_openshift-operators(ec346c4e-f52f-4ee4-9697-e4b95405fe5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-rztzq_openshift-operators(ec346c4e-f52f-4ee4-9697-e4b95405fe5d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(073856a531780db73120c55ec1a379ce57db8b0484db59fc8b2a12b003341e13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" podUID="ec346c4e-f52f-4ee4-9697-e4b95405fe5d" Feb 17 00:33:02 crc kubenswrapper[4805]: I0217 00:33:02.045650 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:33:02 crc kubenswrapper[4805]: I0217 00:33:02.079240 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:33:07 crc kubenswrapper[4805]: I0217 00:33:07.785130 4805 scope.go:117] "RemoveContainer" containerID="123d9a27d0d9e8003b08e74a0e80d8cc248675429f1601cb9849bdeec682f406" Feb 17 00:33:07 crc kubenswrapper[4805]: E0217 00:33:07.785982 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lk6fw_openshift-multus(5da6b304-e28f-4666-817f-06bcc107e3fe)\"" pod="openshift-multus/multus-lk6fw" podUID="5da6b304-e28f-4666-817f-06bcc107e3fe" Feb 17 00:33:12 crc kubenswrapper[4805]: I0217 00:33:12.784494 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:33:12 crc kubenswrapper[4805]: I0217 00:33:12.785367 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:33:12 crc kubenswrapper[4805]: E0217 00:33:12.817624 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(9e503a50746fb003192d293d5ae2273a5f681b753a69ee58394fee46e9ba9001): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:33:12 crc kubenswrapper[4805]: E0217 00:33:12.817866 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(9e503a50746fb003192d293d5ae2273a5f681b753a69ee58394fee46e9ba9001): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:33:12 crc kubenswrapper[4805]: E0217 00:33:12.817888 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(9e503a50746fb003192d293d5ae2273a5f681b753a69ee58394fee46e9ba9001): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:33:12 crc kubenswrapper[4805]: E0217 00:33:12.817940 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators(93be50de-fcd3-41d1-8641-1b7c73cb26ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators(93be50de-fcd3-41d1-8641-1b7c73cb26ea)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_openshift-operators_93be50de-fcd3-41d1-8641-1b7c73cb26ea_0(9e503a50746fb003192d293d5ae2273a5f681b753a69ee58394fee46e9ba9001): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" podUID="93be50de-fcd3-41d1-8641-1b7c73cb26ea" Feb 17 00:33:13 crc kubenswrapper[4805]: I0217 00:33:13.783867 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:33:13 crc kubenswrapper[4805]: I0217 00:33:13.783892 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:33:13 crc kubenswrapper[4805]: I0217 00:33:13.784757 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:33:13 crc kubenswrapper[4805]: I0217 00:33:13.784806 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:33:13 crc kubenswrapper[4805]: E0217 00:33:13.828177 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(a053ae41e5a00e2bcbda40478151e1113d226ca00fa037bd9c54303592b66229): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:33:13 crc kubenswrapper[4805]: E0217 00:33:13.828241 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(a053ae41e5a00e2bcbda40478151e1113d226ca00fa037bd9c54303592b66229): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:33:13 crc kubenswrapper[4805]: E0217 00:33:13.828265 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(a053ae41e5a00e2bcbda40478151e1113d226ca00fa037bd9c54303592b66229): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:33:13 crc kubenswrapper[4805]: E0217 00:33:13.828314 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators(841806ee-4049-4561-b025-3af0469f8fb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators(841806ee-4049-4561-b025-3af0469f8fb2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-xw7l6_openshift-operators_841806ee-4049-4561-b025-3af0469f8fb2_0(a053ae41e5a00e2bcbda40478151e1113d226ca00fa037bd9c54303592b66229): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" podUID="841806ee-4049-4561-b025-3af0469f8fb2" Feb 17 00:33:13 crc kubenswrapper[4805]: E0217 00:33:13.868921 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(508c8f42de1467dedcce027ecb5ce672bd602c3ac0de4d3234df0bbd52e8032f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:33:13 crc kubenswrapper[4805]: E0217 00:33:13.868981 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(508c8f42de1467dedcce027ecb5ce672bd602c3ac0de4d3234df0bbd52e8032f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:33:13 crc kubenswrapper[4805]: E0217 00:33:13.869000 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(508c8f42de1467dedcce027ecb5ce672bd602c3ac0de4d3234df0bbd52e8032f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:33:13 crc kubenswrapper[4805]: E0217 00:33:13.869038 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators(c9f5bbbc-6740-427e-90d5-69011b2966cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators(c9f5bbbc-6740-427e-90d5-69011b2966cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_openshift-operators_c9f5bbbc-6740-427e-90d5-69011b2966cd_0(508c8f42de1467dedcce027ecb5ce672bd602c3ac0de4d3234df0bbd52e8032f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" podUID="c9f5bbbc-6740-427e-90d5-69011b2966cd" Feb 17 00:33:15 crc kubenswrapper[4805]: I0217 00:33:15.784448 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:15 crc kubenswrapper[4805]: I0217 00:33:15.784914 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:15 crc kubenswrapper[4805]: I0217 00:33:15.785019 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:15 crc kubenswrapper[4805]: I0217 00:33:15.785164 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:15 crc kubenswrapper[4805]: E0217 00:33:15.813532 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(f52f6030ce91bcb2078f00c99ff5c978f4dab6135ff67a3fb46d0fbc1979f5b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:33:15 crc kubenswrapper[4805]: E0217 00:33:15.813603 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(f52f6030ce91bcb2078f00c99ff5c978f4dab6135ff67a3fb46d0fbc1979f5b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:15 crc kubenswrapper[4805]: E0217 00:33:15.813628 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(f52f6030ce91bcb2078f00c99ff5c978f4dab6135ff67a3fb46d0fbc1979f5b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:15 crc kubenswrapper[4805]: E0217 00:33:15.813677 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-rztzq_openshift-operators(ec346c4e-f52f-4ee4-9697-e4b95405fe5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-rztzq_openshift-operators(ec346c4e-f52f-4ee4-9697-e4b95405fe5d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-rztzq_openshift-operators_ec346c4e-f52f-4ee4-9697-e4b95405fe5d_0(f52f6030ce91bcb2078f00c99ff5c978f4dab6135ff67a3fb46d0fbc1979f5b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" podUID="ec346c4e-f52f-4ee4-9697-e4b95405fe5d" Feb 17 00:33:15 crc kubenswrapper[4805]: E0217 00:33:15.829187 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(3826b228cae2fe9ccc5976d85c702923e2ed58731b2abec41124d352029b7ca3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:33:15 crc kubenswrapper[4805]: E0217 00:33:15.829244 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(3826b228cae2fe9ccc5976d85c702923e2ed58731b2abec41124d352029b7ca3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:15 crc kubenswrapper[4805]: E0217 00:33:15.829272 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(3826b228cae2fe9ccc5976d85c702923e2ed58731b2abec41124d352029b7ca3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:15 crc kubenswrapper[4805]: E0217 00:33:15.829318 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-btvcr_openshift-operators(6b7fab38-3b46-42bc-a296-945f451f04f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-btvcr_openshift-operators(6b7fab38-3b46-42bc-a296-945f451f04f6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-btvcr_openshift-operators_6b7fab38-3b46-42bc-a296-945f451f04f6_0(3826b228cae2fe9ccc5976d85c702923e2ed58731b2abec41124d352029b7ca3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" podUID="6b7fab38-3b46-42bc-a296-945f451f04f6" Feb 17 00:33:21 crc kubenswrapper[4805]: I0217 00:33:21.784806 4805 scope.go:117] "RemoveContainer" containerID="123d9a27d0d9e8003b08e74a0e80d8cc248675429f1601cb9849bdeec682f406" Feb 17 00:33:22 crc kubenswrapper[4805]: I0217 00:33:22.175549 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lk6fw_5da6b304-e28f-4666-817f-06bcc107e3fe/kube-multus/2.log" Feb 17 00:33:22 crc kubenswrapper[4805]: I0217 00:33:22.175860 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lk6fw" event={"ID":"5da6b304-e28f-4666-817f-06bcc107e3fe","Type":"ContainerStarted","Data":"eb62a505531bdd6ea1e141139903a7a6973fb1ff38e64664181d0234b80b8b94"} Feb 17 00:33:23 crc kubenswrapper[4805]: I0217 00:33:23.965743 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jz55b" Feb 17 00:33:25 crc kubenswrapper[4805]: I0217 00:33:25.784836 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:33:25 crc kubenswrapper[4805]: I0217 00:33:25.785221 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:33:25 crc kubenswrapper[4805]: I0217 00:33:25.786179 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" Feb 17 00:33:25 crc kubenswrapper[4805]: I0217 00:33:25.786481 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" Feb 17 00:33:26 crc kubenswrapper[4805]: I0217 00:33:26.068116 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg"] Feb 17 00:33:26 crc kubenswrapper[4805]: I0217 00:33:26.117335 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6"] Feb 17 00:33:26 crc kubenswrapper[4805]: I0217 00:33:26.202048 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" event={"ID":"841806ee-4049-4561-b025-3af0469f8fb2","Type":"ContainerStarted","Data":"81b2dfc1c11b1318dbf9b6a9f16158e9bb77ac438a78aafc97c89472827898cb"} Feb 17 00:33:26 crc kubenswrapper[4805]: I0217 00:33:26.210598 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" event={"ID":"93be50de-fcd3-41d1-8641-1b7c73cb26ea","Type":"ContainerStarted","Data":"874e448e979b71c821afd3f7dd105ba9e4993c3450fde4eec7c92f25665b1712"} Feb 17 00:33:28 crc kubenswrapper[4805]: I0217 00:33:28.783828 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:33:28 crc kubenswrapper[4805]: I0217 00:33:28.784665 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" Feb 17 00:33:30 crc kubenswrapper[4805]: I0217 00:33:30.784493 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:30 crc kubenswrapper[4805]: I0217 00:33:30.785251 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:30 crc kubenswrapper[4805]: I0217 00:33:30.784546 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:30 crc kubenswrapper[4805]: I0217 00:33:30.785617 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:30 crc kubenswrapper[4805]: I0217 00:33:30.813824 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84"] Feb 17 00:33:31 crc kubenswrapper[4805]: I0217 00:33:31.240871 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" event={"ID":"c9f5bbbc-6740-427e-90d5-69011b2966cd","Type":"ContainerStarted","Data":"74bd8ce37d50b06351eb2d22945fca014bc2d59abddac37794db0af73dae56c5"} Feb 17 00:33:31 crc kubenswrapper[4805]: I0217 00:33:31.381861 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-btvcr"] Feb 17 00:33:31 crc kubenswrapper[4805]: W0217 00:33:31.393814 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b7fab38_3b46_42bc_a296_945f451f04f6.slice/crio-ff70c86ec5b23134b853a7d4e025def8ae6274b99083bacccc5c5e2a1b2cf802 WatchSource:0}: Error finding container ff70c86ec5b23134b853a7d4e025def8ae6274b99083bacccc5c5e2a1b2cf802: Status 404 returned error can't find the container with id ff70c86ec5b23134b853a7d4e025def8ae6274b99083bacccc5c5e2a1b2cf802 Feb 17 00:33:31 crc kubenswrapper[4805]: I0217 00:33:31.435546 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-rztzq"] Feb 17 00:33:31 crc kubenswrapper[4805]: W0217 00:33:31.439761 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec346c4e_f52f_4ee4_9697_e4b95405fe5d.slice/crio-72014c2ffff3eb4ea7374b737b5047a5b80e74b6ea3ff6497a55061442aee818 WatchSource:0}: Error finding container 72014c2ffff3eb4ea7374b737b5047a5b80e74b6ea3ff6497a55061442aee818: Status 404 returned error can't find the container with id 72014c2ffff3eb4ea7374b737b5047a5b80e74b6ea3ff6497a55061442aee818 Feb 17 00:33:32 crc kubenswrapper[4805]: I0217 00:33:32.256513 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" event={"ID":"6b7fab38-3b46-42bc-a296-945f451f04f6","Type":"ContainerStarted","Data":"ff70c86ec5b23134b853a7d4e025def8ae6274b99083bacccc5c5e2a1b2cf802"} Feb 17 00:33:32 crc kubenswrapper[4805]: I0217 00:33:32.267123 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" event={"ID":"93be50de-fcd3-41d1-8641-1b7c73cb26ea","Type":"ContainerStarted","Data":"db8df7dc59d1160a20632e58cb1cdfd35e0ae7c87ab9037e36c70e00e9013ae9"} Feb 17 00:33:32 crc kubenswrapper[4805]: I0217 00:33:32.275900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" event={"ID":"841806ee-4049-4561-b025-3af0469f8fb2","Type":"ContainerStarted","Data":"88dc51596ef4cd07f348f827b9833a2a4853ba910cb01c2a7085cf3c71f6e486"} Feb 17 00:33:32 crc kubenswrapper[4805]: I0217 00:33:32.284578 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" event={"ID":"ec346c4e-f52f-4ee4-9697-e4b95405fe5d","Type":"ContainerStarted","Data":"72014c2ffff3eb4ea7374b737b5047a5b80e74b6ea3ff6497a55061442aee818"} Feb 17 00:33:32 crc kubenswrapper[4805]: I0217 00:33:32.309045 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg" podStartSLOduration=28.237821469 podStartE2EDuration="33.309018644s" podCreationTimestamp="2026-02-17 00:32:59 +0000 UTC" firstStartedPulling="2026-02-17 00:33:26.08087501 +0000 UTC m=+632.096684418" lastFinishedPulling="2026-02-17 00:33:31.152072195 +0000 UTC m=+637.167881593" observedRunningTime="2026-02-17 00:33:32.302358752 +0000 UTC m=+638.318168170" watchObservedRunningTime="2026-02-17 00:33:32.309018644 +0000 UTC m=+638.324828062" Feb 17 00:33:32 crc kubenswrapper[4805]: I0217 00:33:32.382204 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-xw7l6" podStartSLOduration=28.356383325 podStartE2EDuration="33.382185376s" podCreationTimestamp="2026-02-17 00:32:59 +0000 UTC" firstStartedPulling="2026-02-17 00:33:26.126704317 +0000 UTC m=+632.142513715" lastFinishedPulling="2026-02-17 00:33:31.152506368 +0000 UTC m=+637.168315766" observedRunningTime="2026-02-17 00:33:32.33324176 +0000 UTC m=+638.349051168" watchObservedRunningTime="2026-02-17 00:33:32.382185376 +0000 UTC m=+638.397994774" Feb 17 00:33:33 crc kubenswrapper[4805]: I0217 00:33:33.293908 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" event={"ID":"c9f5bbbc-6740-427e-90d5-69011b2966cd","Type":"ContainerStarted","Data":"18a85b7b542878fc9ebf2ba483261dfc1875de5fa32d5de4d82490cabff50229"} Feb 17 00:33:34 crc kubenswrapper[4805]: I0217 00:33:34.809669 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84" podStartSLOduration=34.699286277 podStartE2EDuration="35.809648837s" podCreationTimestamp="2026-02-17 00:32:59 +0000 UTC" firstStartedPulling="2026-02-17 00:33:31.104646413 +0000 UTC m=+637.120455801" lastFinishedPulling="2026-02-17 00:33:32.215008953 +0000 UTC m=+638.230818361" observedRunningTime="2026-02-17 00:33:33.318042183 +0000 UTC m=+639.333851591" watchObservedRunningTime="2026-02-17 00:33:34.809648837 +0000 UTC m=+640.825458245" Feb 17 00:33:35 crc kubenswrapper[4805]: I0217 00:33:35.308670 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" event={"ID":"6b7fab38-3b46-42bc-a296-945f451f04f6","Type":"ContainerStarted","Data":"2fab539b0431961a85d858d27d80b792e6cc5ab9e0adf7b603f3ddaa4694239b"} Feb 17 00:33:35 crc kubenswrapper[4805]: I0217 00:33:35.308823 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:35 crc kubenswrapper[4805]: I0217 00:33:35.327227 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/perses-operator-5bf474d74f-btvcr" podStartSLOduration=33.384343069 podStartE2EDuration="36.327208427s" podCreationTimestamp="2026-02-17 00:32:59 +0000 UTC" firstStartedPulling="2026-02-17 00:33:31.397788965 +0000 UTC m=+637.413598363" lastFinishedPulling="2026-02-17 00:33:34.340654313 +0000 UTC m=+640.356463721" observedRunningTime="2026-02-17 00:33:35.32696966 +0000 UTC m=+641.342779058" watchObservedRunningTime="2026-02-17 00:33:35.327208427 +0000 UTC m=+641.343017825" Feb 17 00:33:37 crc kubenswrapper[4805]: I0217 00:33:37.340103 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" event={"ID":"ec346c4e-f52f-4ee4-9697-e4b95405fe5d","Type":"ContainerStarted","Data":"6ed6400cc670816eab9caba8f102b6cb31621742cd76c94bf367aa394c94f431"} Feb 17 00:33:37 crc kubenswrapper[4805]: I0217 00:33:37.340699 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:37 crc kubenswrapper[4805]: I0217 00:33:37.354782 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" Feb 17 00:33:37 crc kubenswrapper[4805]: I0217 00:33:37.376958 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-rztzq" podStartSLOduration=33.238404986 podStartE2EDuration="38.376937895s" podCreationTimestamp="2026-02-17 00:32:59 +0000 UTC" firstStartedPulling="2026-02-17 00:33:31.442420157 +0000 UTC m=+637.458229555" lastFinishedPulling="2026-02-17 00:33:36.580953056 +0000 UTC m=+642.596762464" observedRunningTime="2026-02-17 00:33:37.371821278 +0000 UTC m=+643.387630746" watchObservedRunningTime="2026-02-17 00:33:37.376937895 +0000 UTC m=+643.392747303" Feb 17 00:33:39 crc kubenswrapper[4805]: I0217 00:33:39.856128 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-btvcr" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.246791 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l9t8q"] Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.247679 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9t8q" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.250943 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.251123 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-sz2tr" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.251180 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.251246 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm5zk\" (UniqueName: \"kubernetes.io/projected/0fd44ff9-92b9-4699-8435-a98175b3437e-kube-api-access-lm5zk\") pod \"cert-manager-cainjector-cf98fcc89-l9t8q\" (UID: \"0fd44ff9-92b9-4699-8435-a98175b3437e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9t8q" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.253075 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2h6qr"] Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.253949 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2h6qr" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.256465 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l9t8q"] Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.260068 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cj997" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.269782 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2h6qr"] Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.275696 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-6jgm7"] Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.276331 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-6jgm7" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.280698 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-fhf7p" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.289415 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-6jgm7"] Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.352666 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm5zk\" (UniqueName: \"kubernetes.io/projected/0fd44ff9-92b9-4699-8435-a98175b3437e-kube-api-access-lm5zk\") pod \"cert-manager-cainjector-cf98fcc89-l9t8q\" (UID: \"0fd44ff9-92b9-4699-8435-a98175b3437e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9t8q" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.352729 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jwjc\" (UniqueName: \"kubernetes.io/projected/b3a98919-e2b8-4289-a46c-834a0c1f2460-kube-api-access-6jwjc\") pod \"cert-manager-858654f9db-2h6qr\" (UID: \"b3a98919-e2b8-4289-a46c-834a0c1f2460\") " pod="cert-manager/cert-manager-858654f9db-2h6qr" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.370256 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm5zk\" (UniqueName: \"kubernetes.io/projected/0fd44ff9-92b9-4699-8435-a98175b3437e-kube-api-access-lm5zk\") pod \"cert-manager-cainjector-cf98fcc89-l9t8q\" (UID: \"0fd44ff9-92b9-4699-8435-a98175b3437e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9t8q" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.453461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jwjc\" (UniqueName: \"kubernetes.io/projected/b3a98919-e2b8-4289-a46c-834a0c1f2460-kube-api-access-6jwjc\") pod \"cert-manager-858654f9db-2h6qr\" (UID: \"b3a98919-e2b8-4289-a46c-834a0c1f2460\") " pod="cert-manager/cert-manager-858654f9db-2h6qr" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.453512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtnps\" (UniqueName: \"kubernetes.io/projected/212fc243-8a59-46c7-9885-ef307f45edaa-kube-api-access-rtnps\") pod \"cert-manager-webhook-687f57d79b-6jgm7\" (UID: \"212fc243-8a59-46c7-9885-ef307f45edaa\") " pod="cert-manager/cert-manager-webhook-687f57d79b-6jgm7" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.470777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jwjc\" (UniqueName: \"kubernetes.io/projected/b3a98919-e2b8-4289-a46c-834a0c1f2460-kube-api-access-6jwjc\") pod \"cert-manager-858654f9db-2h6qr\" (UID: \"b3a98919-e2b8-4289-a46c-834a0c1f2460\") " pod="cert-manager/cert-manager-858654f9db-2h6qr" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.554347 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtnps\" (UniqueName: \"kubernetes.io/projected/212fc243-8a59-46c7-9885-ef307f45edaa-kube-api-access-rtnps\") pod \"cert-manager-webhook-687f57d79b-6jgm7\" (UID: \"212fc243-8a59-46c7-9885-ef307f45edaa\") " pod="cert-manager/cert-manager-webhook-687f57d79b-6jgm7" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.573573 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtnps\" 
(UniqueName: \"kubernetes.io/projected/212fc243-8a59-46c7-9885-ef307f45edaa-kube-api-access-rtnps\") pod \"cert-manager-webhook-687f57d79b-6jgm7\" (UID: \"212fc243-8a59-46c7-9885-ef307f45edaa\") " pod="cert-manager/cert-manager-webhook-687f57d79b-6jgm7" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.609190 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9t8q" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.617286 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2h6qr" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.626819 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-6jgm7" Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.839701 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-l9t8q"] Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.870453 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-6jgm7"] Feb 17 00:33:46 crc kubenswrapper[4805]: I0217 00:33:46.912781 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2h6qr"] Feb 17 00:33:47 crc kubenswrapper[4805]: I0217 00:33:47.399752 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2h6qr" event={"ID":"b3a98919-e2b8-4289-a46c-834a0c1f2460","Type":"ContainerStarted","Data":"f2286d349b80489cf4549bcfed1e3b454e62c5d1cdad8e83dbfe7fdad8218de6"} Feb 17 00:33:47 crc kubenswrapper[4805]: I0217 00:33:47.401576 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9t8q" event={"ID":"0fd44ff9-92b9-4699-8435-a98175b3437e","Type":"ContainerStarted","Data":"485c1d67e584648aecf3b3240207dcf49e2b924261bd1d8cc874a835f64e2a29"} Feb 17 00:33:47 crc kubenswrapper[4805]: I0217 00:33:47.402781 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-6jgm7" event={"ID":"212fc243-8a59-46c7-9885-ef307f45edaa","Type":"ContainerStarted","Data":"30fd0f5e0aec6109ef86274d37838a16c0f213eb2b05880b94bb2401a3d5ff5b"} Feb 17 00:33:51 crc kubenswrapper[4805]: I0217 00:33:51.428740 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2h6qr" event={"ID":"b3a98919-e2b8-4289-a46c-834a0c1f2460","Type":"ContainerStarted","Data":"86be28d4fd6f1ca49cf23dbe57f820cdb99a204462975f81d71b4c2c58c743a2"} Feb 17 00:33:51 crc kubenswrapper[4805]: I0217 00:33:51.430793 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9t8q" event={"ID":"0fd44ff9-92b9-4699-8435-a98175b3437e","Type":"ContainerStarted","Data":"075ae9fb39110bac663cbda70e29ae0a8429609f507f51db3ebb7d4021a10255"} Feb 17 00:33:51 crc kubenswrapper[4805]: I0217 00:33:51.432098 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-6jgm7" event={"ID":"212fc243-8a59-46c7-9885-ef307f45edaa","Type":"ContainerStarted","Data":"70ef9578762f91587dabd8127f1ea1378dc7c70674f1095e17c5c4af2a84475d"} Feb 17 00:33:51 crc kubenswrapper[4805]: I0217 00:33:51.432252 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-6jgm7" Feb 17 00:33:51 crc kubenswrapper[4805]: I0217 
00:33:51.442609 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2h6qr" podStartSLOduration=1.497718984 podStartE2EDuration="5.442594799s" podCreationTimestamp="2026-02-17 00:33:46 +0000 UTC" firstStartedPulling="2026-02-17 00:33:46.923598315 +0000 UTC m=+652.939407713" lastFinishedPulling="2026-02-17 00:33:50.86847412 +0000 UTC m=+656.884283528" observedRunningTime="2026-02-17 00:33:51.441516859 +0000 UTC m=+657.457326257" watchObservedRunningTime="2026-02-17 00:33:51.442594799 +0000 UTC m=+657.458404197" Feb 17 00:33:51 crc kubenswrapper[4805]: I0217 00:33:51.458639 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-6jgm7" podStartSLOduration=1.467833787 podStartE2EDuration="5.458623178s" podCreationTimestamp="2026-02-17 00:33:46 +0000 UTC" firstStartedPulling="2026-02-17 00:33:46.877301798 +0000 UTC m=+652.893111196" lastFinishedPulling="2026-02-17 00:33:50.868091189 +0000 UTC m=+656.883900587" observedRunningTime="2026-02-17 00:33:51.454962026 +0000 UTC m=+657.470771444" watchObservedRunningTime="2026-02-17 00:33:51.458623178 +0000 UTC m=+657.474432576" Feb 17 00:33:51 crc kubenswrapper[4805]: I0217 00:33:51.486115 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-l9t8q" podStartSLOduration=1.403564968 podStartE2EDuration="5.486098348s" podCreationTimestamp="2026-02-17 00:33:46 +0000 UTC" firstStartedPulling="2026-02-17 00:33:46.848179983 +0000 UTC m=+652.863989381" lastFinishedPulling="2026-02-17 00:33:50.930713343 +0000 UTC m=+656.946522761" observedRunningTime="2026-02-17 00:33:51.478582287 +0000 UTC m=+657.494391685" watchObservedRunningTime="2026-02-17 00:33:51.486098348 +0000 UTC m=+657.501907746" Feb 17 00:33:56 crc kubenswrapper[4805]: I0217 00:33:56.630883 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-6jgm7" Feb 17 00:34:22 crc kubenswrapper[4805]: I0217 00:34:22.989834 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq"] Feb 17 00:34:22 crc kubenswrapper[4805]: I0217 00:34:22.991570 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:22 crc kubenswrapper[4805]: I0217 00:34:22.993951 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.017060 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq"] Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.076830 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.076890 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.114098 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz"] Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.115378 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.124148 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz"] Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.142143 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.142221 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.142292 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lglh\" (UniqueName: \"kubernetes.io/projected/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-kube-api-access-5lglh\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.244057 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.244130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.244159 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.244184 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.244213 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqgcc\" (UniqueName: \"kubernetes.io/projected/d8cf55d6-2938-4730-bacd-f6bdbb287fca-kube-api-access-mqgcc\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.244244 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lglh\" (UniqueName: \"kubernetes.io/projected/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-kube-api-access-5lglh\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.244987 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.245017 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " 
pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.264629 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lglh\" (UniqueName: \"kubernetes.io/projected/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-kube-api-access-5lglh\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.306446 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.345847 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.346226 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.346261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqgcc\" (UniqueName: \"kubernetes.io/projected/d8cf55d6-2938-4730-bacd-f6bdbb287fca-kube-api-access-mqgcc\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.346819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.347017 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.367441 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqgcc\" (UniqueName: \"kubernetes.io/projected/d8cf55d6-2938-4730-bacd-f6bdbb287fca-kube-api-access-mqgcc\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.434780 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.545552 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq"] Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.639676 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz"] Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.660201 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" event={"ID":"d8cf55d6-2938-4730-bacd-f6bdbb287fca","Type":"ContainerStarted","Data":"896fb10d28f97164743f82b32c19024651c28368ec313f7b2c69a8bd263ea83f"} Feb 17 00:34:23 crc kubenswrapper[4805]: I0217 00:34:23.661496 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" event={"ID":"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e","Type":"ContainerStarted","Data":"ad978c967cc241b1834d7aa076d3060ceedc7c4ae095672ecedeb823ee9e0b47"} Feb 17 00:34:23 crc kubenswrapper[4805]: E0217 00:34:23.875641 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a140ec1_85c5_4c6f_86cc_a14c6ecd120e.slice/crio-conmon-e27f77a31ad692e22e82a4939660458648d60cc183e2a7d1c5742fe6e3a4ec49.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8cf55d6_2938_4730_bacd_f6bdbb287fca.slice/crio-79522623d7fdf36b14d433c5a04fdf7f42a32ecb4813da18e36fc3d971297424.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8cf55d6_2938_4730_bacd_f6bdbb287fca.slice/crio-conmon-79522623d7fdf36b14d433c5a04fdf7f42a32ecb4813da18e36fc3d971297424.scope\": RecentStats: unable to find data in memory cache]" Feb 17 00:34:24 crc kubenswrapper[4805]: I0217 00:34:24.672122 4805 generic.go:334] "Generic (PLEG): container finished" podID="d8cf55d6-2938-4730-bacd-f6bdbb287fca" containerID="79522623d7fdf36b14d433c5a04fdf7f42a32ecb4813da18e36fc3d971297424" exitCode=0 Feb 17 00:34:24 crc kubenswrapper[4805]: I0217 00:34:24.672166 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" event={"ID":"d8cf55d6-2938-4730-bacd-f6bdbb287fca","Type":"ContainerDied","Data":"79522623d7fdf36b14d433c5a04fdf7f42a32ecb4813da18e36fc3d971297424"} Feb 17 00:34:24 crc kubenswrapper[4805]: I0217 00:34:24.674791 4805 generic.go:334] "Generic (PLEG): container finished" podID="5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" containerID="e27f77a31ad692e22e82a4939660458648d60cc183e2a7d1c5742fe6e3a4ec49" exitCode=0 Feb 17 00:34:24 crc kubenswrapper[4805]: I0217 00:34:24.674859 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" 
event={"ID":"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e","Type":"ContainerDied","Data":"e27f77a31ad692e22e82a4939660458648d60cc183e2a7d1c5742fe6e3a4ec49"} Feb 17 00:34:26 crc kubenswrapper[4805]: I0217 00:34:26.706346 4805 generic.go:334] "Generic (PLEG): container finished" podID="5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" containerID="5c60b2339beced34a3c6c1c6ce1978ed9732ebb84fc3db18bbd3264aa1ee43aa" exitCode=0 Feb 17 00:34:26 crc kubenswrapper[4805]: I0217 00:34:26.706456 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" event={"ID":"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e","Type":"ContainerDied","Data":"5c60b2339beced34a3c6c1c6ce1978ed9732ebb84fc3db18bbd3264aa1ee43aa"} Feb 17 00:34:26 crc kubenswrapper[4805]: I0217 00:34:26.708703 4805 generic.go:334] "Generic (PLEG): container finished" podID="d8cf55d6-2938-4730-bacd-f6bdbb287fca" containerID="62ad33c0857ae49d0f5afb3afd87222fbf0ff8f8da99f5076e709528f093b38d" exitCode=0 Feb 17 00:34:26 crc kubenswrapper[4805]: I0217 00:34:26.708764 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" event={"ID":"d8cf55d6-2938-4730-bacd-f6bdbb287fca","Type":"ContainerDied","Data":"62ad33c0857ae49d0f5afb3afd87222fbf0ff8f8da99f5076e709528f093b38d"} Feb 17 00:34:27 crc kubenswrapper[4805]: I0217 00:34:27.719031 4805 generic.go:334] "Generic (PLEG): container finished" podID="d8cf55d6-2938-4730-bacd-f6bdbb287fca" containerID="478bcd429e94696354f741666cdca659bf344bdf3dafdbce496790c33ea363f3" exitCode=0 Feb 17 00:34:27 crc kubenswrapper[4805]: I0217 00:34:27.719147 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" event={"ID":"d8cf55d6-2938-4730-bacd-f6bdbb287fca","Type":"ContainerDied","Data":"478bcd429e94696354f741666cdca659bf344bdf3dafdbce496790c33ea363f3"} Feb 17 00:34:27 crc kubenswrapper[4805]: I0217 00:34:27.722170 4805 generic.go:334] "Generic (PLEG): container finished" podID="5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" containerID="9b8d3d59481994383705d7c896df4b5f686a3045f921d1cf643352aee457e6f8" exitCode=0 Feb 17 00:34:27 crc kubenswrapper[4805]: I0217 00:34:27.722248 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" event={"ID":"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e","Type":"ContainerDied","Data":"9b8d3d59481994383705d7c896df4b5f686a3045f921d1cf643352aee457e6f8"} Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.036030 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.042690 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.144427 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqgcc\" (UniqueName: \"kubernetes.io/projected/d8cf55d6-2938-4730-bacd-f6bdbb287fca-kube-api-access-mqgcc\") pod \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.144490 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-util\") pod \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.144536 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-bundle\") pod \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.144590 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-bundle\") pod \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.144647 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-util\") pod \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\" (UID: \"d8cf55d6-2938-4730-bacd-f6bdbb287fca\") " Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.144710 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lglh\" (UniqueName: \"kubernetes.io/projected/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-kube-api-access-5lglh\") pod \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\" (UID: \"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e\") " Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.145625 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-bundle" (OuterVolumeSpecName: "bundle") pod "d8cf55d6-2938-4730-bacd-f6bdbb287fca" (UID: "d8cf55d6-2938-4730-bacd-f6bdbb287fca"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.146366 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-bundle" (OuterVolumeSpecName: "bundle") pod "5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" (UID: "5a140ec1-85c5-4c6f-86cc-a14c6ecd120e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.150507 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8cf55d6-2938-4730-bacd-f6bdbb287fca-kube-api-access-mqgcc" (OuterVolumeSpecName: "kube-api-access-mqgcc") pod "d8cf55d6-2938-4730-bacd-f6bdbb287fca" (UID: "d8cf55d6-2938-4730-bacd-f6bdbb287fca"). InnerVolumeSpecName "kube-api-access-mqgcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.150547 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-kube-api-access-5lglh" (OuterVolumeSpecName: "kube-api-access-5lglh") pod "5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" (UID: "5a140ec1-85c5-4c6f-86cc-a14c6ecd120e"). InnerVolumeSpecName "kube-api-access-5lglh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.246045 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.246084 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lglh\" (UniqueName: \"kubernetes.io/projected/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-kube-api-access-5lglh\") on node \"crc\" DevicePath \"\"" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.246098 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqgcc\" (UniqueName: \"kubernetes.io/projected/d8cf55d6-2938-4730-bacd-f6bdbb287fca-kube-api-access-mqgcc\") on node \"crc\" DevicePath \"\"" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.246113 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.739158 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" event={"ID":"5a140ec1-85c5-4c6f-86cc-a14c6ecd120e","Type":"ContainerDied","Data":"ad978c967cc241b1834d7aa076d3060ceedc7c4ae095672ecedeb823ee9e0b47"} Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.739468 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad978c967cc241b1834d7aa076d3060ceedc7c4ae095672ecedeb823ee9e0b47" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.739220 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.741764 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" event={"ID":"d8cf55d6-2938-4730-bacd-f6bdbb287fca","Type":"ContainerDied","Data":"896fb10d28f97164743f82b32c19024651c28368ec313f7b2c69a8bd263ea83f"} Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.741802 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="896fb10d28f97164743f82b32c19024651c28368ec313f7b2c69a8bd263ea83f" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.741813 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.875043 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-util" (OuterVolumeSpecName: "util") pod "d8cf55d6-2938-4730-bacd-f6bdbb287fca" (UID: "d8cf55d6-2938-4730-bacd-f6bdbb287fca"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.901392 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-util" (OuterVolumeSpecName: "util") pod "5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" (UID: "5a140ec1-85c5-4c6f-86cc-a14c6ecd120e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.955402 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d8cf55d6-2938-4730-bacd-f6bdbb287fca-util\") on node \"crc\" DevicePath \"\"" Feb 17 00:34:29 crc kubenswrapper[4805]: I0217 00:34:29.955441 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5a140ec1-85c5-4c6f-86cc-a14c6ecd120e-util\") on node \"crc\" DevicePath \"\"" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.732162 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh"] Feb 17 00:34:40 crc kubenswrapper[4805]: E0217 00:34:40.733274 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" containerName="util" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.733298 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" containerName="util" Feb 17 00:34:40 crc kubenswrapper[4805]: E0217 00:34:40.733317 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cf55d6-2938-4730-bacd-f6bdbb287fca" containerName="util" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.733362 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cf55d6-2938-4730-bacd-f6bdbb287fca" containerName="util" Feb 17 00:34:40 crc kubenswrapper[4805]: E0217 00:34:40.733377 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" containerName="pull" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.733386 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" containerName="pull" Feb 17 00:34:40 crc kubenswrapper[4805]: E0217 00:34:40.733398 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cf55d6-2938-4730-bacd-f6bdbb287fca" containerName="extract" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.733407 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cf55d6-2938-4730-bacd-f6bdbb287fca" containerName="extract" Feb 17 00:34:40 crc kubenswrapper[4805]: E0217 00:34:40.733425 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" containerName="extract" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.733436 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" containerName="extract" Feb 17 00:34:40 crc kubenswrapper[4805]: E0217 00:34:40.733453 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cf55d6-2938-4730-bacd-f6bdbb287fca" containerName="pull" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.733462 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cf55d6-2938-4730-bacd-f6bdbb287fca" containerName="pull" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.733635 4805 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5a140ec1-85c5-4c6f-86cc-a14c6ecd120e" containerName="extract" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.733661 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cf55d6-2938-4730-bacd-f6bdbb287fca" containerName="extract" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.735814 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.738007 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.738071 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.738008 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.738590 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.738868 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-mxzxr" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.738936 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.765167 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh"] Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.798017 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvbrr\" (UniqueName: \"kubernetes.io/projected/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-kube-api-access-xvbrr\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.798289 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-manager-config\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.798417 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-webhook-cert\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.798489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-apiservice-cert\") 
pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.798555 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.900003 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-manager-config\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.900062 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-webhook-cert\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.900086 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-apiservice-cert\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.900111 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.900148 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvbrr\" (UniqueName: \"kubernetes.io/projected/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-kube-api-access-xvbrr\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.901052 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-manager-config\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.906746 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-webhook-cert\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.907505 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.907749 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-apiservice-cert\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:40 crc kubenswrapper[4805]: I0217 00:34:40.924925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvbrr\" (UniqueName: \"kubernetes.io/projected/b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01-kube-api-access-xvbrr\") pod \"loki-operator-controller-manager-5659c765-xsxhh\" (UID: \"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:41 crc kubenswrapper[4805]: I0217 00:34:41.050031 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:41 crc kubenswrapper[4805]: I0217 00:34:41.252822 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh"] Feb 17 00:34:41 crc kubenswrapper[4805]: I0217 00:34:41.837308 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" event={"ID":"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01","Type":"ContainerStarted","Data":"5ef96c77a76df511e895640bc5710efa2cd070d82b18d96d17fa93dafbc02c3d"} Feb 17 00:34:42 crc kubenswrapper[4805]: I0217 00:34:42.896217 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-nhjhc"] Feb 17 00:34:42 crc kubenswrapper[4805]: I0217 00:34:42.897371 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-nhjhc" Feb 17 00:34:42 crc kubenswrapper[4805]: I0217 00:34:42.901927 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 17 00:34:42 crc kubenswrapper[4805]: I0217 00:34:42.903144 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 17 00:34:42 crc kubenswrapper[4805]: I0217 00:34:42.903554 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-b4299" Feb 17 00:34:42 crc kubenswrapper[4805]: I0217 00:34:42.912682 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-nhjhc"] Feb 17 00:34:43 crc kubenswrapper[4805]: I0217 00:34:43.027781 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5kt4\" (UniqueName: \"kubernetes.io/projected/cb0332e7-6f7b-4294-878c-85fc89493a58-kube-api-access-f5kt4\") pod \"cluster-logging-operator-c769fd969-nhjhc\" (UID: \"cb0332e7-6f7b-4294-878c-85fc89493a58\") " pod="openshift-logging/cluster-logging-operator-c769fd969-nhjhc" Feb 17 00:34:43 crc kubenswrapper[4805]: I0217 00:34:43.129237 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5kt4\" (UniqueName: \"kubernetes.io/projected/cb0332e7-6f7b-4294-878c-85fc89493a58-kube-api-access-f5kt4\") pod \"cluster-logging-operator-c769fd969-nhjhc\" (UID: \"cb0332e7-6f7b-4294-878c-85fc89493a58\") " pod="openshift-logging/cluster-logging-operator-c769fd969-nhjhc" Feb 17 00:34:43 crc kubenswrapper[4805]: I0217 00:34:43.156615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5kt4\" (UniqueName: \"kubernetes.io/projected/cb0332e7-6f7b-4294-878c-85fc89493a58-kube-api-access-f5kt4\") pod \"cluster-logging-operator-c769fd969-nhjhc\" (UID: \"cb0332e7-6f7b-4294-878c-85fc89493a58\") " pod="openshift-logging/cluster-logging-operator-c769fd969-nhjhc" Feb 17 00:34:43 crc kubenswrapper[4805]: I0217 00:34:43.217539 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-nhjhc" Feb 17 00:34:43 crc kubenswrapper[4805]: I0217 00:34:43.409705 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-nhjhc"] Feb 17 00:34:43 crc kubenswrapper[4805]: I0217 00:34:43.850602 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-nhjhc" event={"ID":"cb0332e7-6f7b-4294-878c-85fc89493a58","Type":"ContainerStarted","Data":"4237a2be1287787040e22b59445a5a92ae4b14ca947e788339aba668fb3b274b"} Feb 17 00:34:50 crc kubenswrapper[4805]: I0217 00:34:50.927214 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-nhjhc" event={"ID":"cb0332e7-6f7b-4294-878c-85fc89493a58","Type":"ContainerStarted","Data":"cbc3b86783e0d86a0758760034666e338302b2e62c0b3a7ed16538e7214fe9f6"} Feb 17 00:34:50 crc kubenswrapper[4805]: I0217 00:34:50.929039 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" event={"ID":"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01","Type":"ContainerStarted","Data":"b9966792ace16af8ab32511a66669c92239a6ab6df6f16dd9d03ca8ef36e5cac"} Feb 17 00:34:50 crc kubenswrapper[4805]: I0217 00:34:50.946684 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-nhjhc" podStartSLOduration=2.566520736 podStartE2EDuration="8.946661549s" podCreationTimestamp="2026-02-17 00:34:42 +0000 UTC" firstStartedPulling="2026-02-17 00:34:43.42930313 +0000 UTC m=+709.445112538" lastFinishedPulling="2026-02-17 00:34:49.809443953 +0000 UTC m=+715.825253351" observedRunningTime="2026-02-17 00:34:50.942440535 +0000 UTC m=+716.958249943" watchObservedRunningTime="2026-02-17 00:34:50.946661549 +0000 UTC m=+716.962470947" Feb 17 00:34:53 crc kubenswrapper[4805]: I0217 00:34:53.077520 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:34:53 crc kubenswrapper[4805]: I0217 00:34:53.077829 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:34:56 crc kubenswrapper[4805]: I0217 00:34:56.968238 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" event={"ID":"b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01","Type":"ContainerStarted","Data":"d2673553409d3400edeb0fbc7d2c55c3f90aa0792d4d5ab5c943d2d2263496f6"} Feb 17 00:34:56 crc kubenswrapper[4805]: I0217 00:34:56.969095 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:56 crc kubenswrapper[4805]: I0217 00:34:56.971595 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" Feb 17 00:34:57 crc kubenswrapper[4805]: I0217 00:34:57.019587 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5659c765-xsxhh" podStartSLOduration=1.691726419 podStartE2EDuration="17.019567429s" podCreationTimestamp="2026-02-17 00:34:40 +0000 UTC" firstStartedPulling="2026-02-17 00:34:41.262270156 +0000 UTC m=+707.278079554" lastFinishedPulling="2026-02-17 00:34:56.590111166 +0000 UTC m=+722.605920564" observedRunningTime="2026-02-17 00:34:57.018937792 +0000 UTC m=+723.034747200" watchObservedRunningTime="2026-02-17 00:34:57.019567429 +0000 UTC m=+723.035376857" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.006713 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.008478 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.011022 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.012066 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.031976 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.110343 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzhtq\" (UniqueName: \"kubernetes.io/projected/f6b8e9a5-5e51-412a-8e09-fec9b4a96279-kube-api-access-jzhtq\") pod \"minio\" (UID: \"f6b8e9a5-5e51-412a-8e09-fec9b4a96279\") " pod="minio-dev/minio" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.110644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6d7792d9-5a0a-418a-9cf0-d2e2e16254e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d7792d9-5a0a-418a-9cf0-d2e2e16254e2\") pod \"minio\" (UID: \"f6b8e9a5-5e51-412a-8e09-fec9b4a96279\") " pod="minio-dev/minio" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.211961 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6d7792d9-5a0a-418a-9cf0-d2e2e16254e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d7792d9-5a0a-418a-9cf0-d2e2e16254e2\") pod \"minio\" (UID: \"f6b8e9a5-5e51-412a-8e09-fec9b4a96279\") " pod="minio-dev/minio" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.212043 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzhtq\" (UniqueName: \"kubernetes.io/projected/f6b8e9a5-5e51-412a-8e09-fec9b4a96279-kube-api-access-jzhtq\") pod \"minio\" (UID: \"f6b8e9a5-5e51-412a-8e09-fec9b4a96279\") " pod="minio-dev/minio" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.216454 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.216504 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6d7792d9-5a0a-418a-9cf0-d2e2e16254e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d7792d9-5a0a-418a-9cf0-d2e2e16254e2\") pod \"minio\" (UID: \"f6b8e9a5-5e51-412a-8e09-fec9b4a96279\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/006c40e88a8af52669dbc8ab8efbca1681a0924fcf9e0a35d87c3a95e7ef92f2/globalmount\"" pod="minio-dev/minio" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.249539 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzhtq\" (UniqueName: \"kubernetes.io/projected/f6b8e9a5-5e51-412a-8e09-fec9b4a96279-kube-api-access-jzhtq\") pod \"minio\" (UID: \"f6b8e9a5-5e51-412a-8e09-fec9b4a96279\") " pod="minio-dev/minio" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.253032 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6d7792d9-5a0a-418a-9cf0-d2e2e16254e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d7792d9-5a0a-418a-9cf0-d2e2e16254e2\") pod \"minio\" (UID: \"f6b8e9a5-5e51-412a-8e09-fec9b4a96279\") " pod="minio-dev/minio" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.373657 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 17 00:35:02 crc kubenswrapper[4805]: I0217 00:35:02.877592 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 00:35:03 crc kubenswrapper[4805]: I0217 00:35:03.017595 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f6b8e9a5-5e51-412a-8e09-fec9b4a96279","Type":"ContainerStarted","Data":"ec6d803f3ebe598fc0b2ada758d89eeae0bcf4f7e2d03f552108aabcec303edf"} Feb 17 00:35:09 crc kubenswrapper[4805]: I0217 00:35:09.055981 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"f6b8e9a5-5e51-412a-8e09-fec9b4a96279","Type":"ContainerStarted","Data":"2b6372c723df9cd788545ce3a95f46b41aeb337d2f9780ce8c1bde445e769ad5"} Feb 17 00:35:09 crc kubenswrapper[4805]: I0217 00:35:09.077054 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.812672189 podStartE2EDuration="10.077034512s" podCreationTimestamp="2026-02-17 00:34:59 +0000 UTC" firstStartedPulling="2026-02-17 00:35:02.89225525 +0000 UTC m=+728.908064688" lastFinishedPulling="2026-02-17 00:35:08.156617603 +0000 UTC m=+734.172427011" observedRunningTime="2026-02-17 00:35:09.071221724 +0000 UTC m=+735.087031162" watchObservedRunningTime="2026-02-17 00:35:09.077034512 +0000 UTC m=+735.092843930" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.382528 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6"] Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.384040 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.387349 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.388154 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-wgzdv" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.388360 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.388842 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.390532 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.396602 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6"] Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.399222 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/a39490eb-8fc3-40ae-9968-453acf06f5da-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.399264 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a39490eb-8fc3-40ae-9968-453acf06f5da-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.399297 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a39490eb-8fc3-40ae-9968-453acf06f5da-config\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.399318 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/a39490eb-8fc3-40ae-9968-453acf06f5da-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.399392 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k6v5\" (UniqueName: \"kubernetes.io/projected/a39490eb-8fc3-40ae-9968-453acf06f5da-kube-api-access-6k6v5\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.501960 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a39490eb-8fc3-40ae-9968-453acf06f5da-config\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.501998 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/a39490eb-8fc3-40ae-9968-453acf06f5da-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.502056 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k6v5\" (UniqueName: \"kubernetes.io/projected/a39490eb-8fc3-40ae-9968-453acf06f5da-kube-api-access-6k6v5\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.502080 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/a39490eb-8fc3-40ae-9968-453acf06f5da-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.502105 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a39490eb-8fc3-40ae-9968-453acf06f5da-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.502934 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a39490eb-8fc3-40ae-9968-453acf06f5da-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.503619 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a39490eb-8fc3-40ae-9968-453acf06f5da-config\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.510304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/a39490eb-8fc3-40ae-9968-453acf06f5da-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.522292 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: 
\"kubernetes.io/secret/a39490eb-8fc3-40ae-9968-453acf06f5da-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.523437 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k6v5\" (UniqueName: \"kubernetes.io/projected/a39490eb-8fc3-40ae-9968-453acf06f5da-kube-api-access-6k6v5\") pod \"logging-loki-distributor-5d5548c9f5-h56f6\" (UID: \"a39490eb-8fc3-40ae-9968-453acf06f5da\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.555659 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-jkggq"] Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.556649 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.560924 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.561686 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.563171 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-jkggq"] Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.565280 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.599318 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz"] Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.600163 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.619847 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.620105 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.621832 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f16ed9b4-0dca-404a-b943-ccb244e680c0-config\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.621886 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f16ed9b4-0dca-404a-b943-ccb244e680c0-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.621914 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwzd9\" (UniqueName: \"kubernetes.io/projected/f16ed9b4-0dca-404a-b943-ccb244e680c0-kube-api-access-mwzd9\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.621941 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f16ed9b4-0dca-404a-b943-ccb244e680c0-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.621970 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f16ed9b4-0dca-404a-b943-ccb244e680c0-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.882679 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.883394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.884950 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ds87\" (UniqueName: \"kubernetes.io/projected/b0bcda11-009a-411a-8e27-ea83b6953ef9-kube-api-access-8ds87\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.884996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0bcda11-009a-411a-8e27-ea83b6953ef9-config\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.885063 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f16ed9b4-0dca-404a-b943-ccb244e680c0-config\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.885097 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f16ed9b4-0dca-404a-b943-ccb244e680c0-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.885123 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwzd9\" (UniqueName: \"kubernetes.io/projected/f16ed9b4-0dca-404a-b943-ccb244e680c0-kube-api-access-mwzd9\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.885152 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f16ed9b4-0dca-404a-b943-ccb244e680c0-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.885172 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: 
\"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.885198 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.885225 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f16ed9b4-0dca-404a-b943-ccb244e680c0-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.885244 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.886384 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f16ed9b4-0dca-404a-b943-ccb244e680c0-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.887401 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f16ed9b4-0dca-404a-b943-ccb244e680c0-config\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.890129 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f16ed9b4-0dca-404a-b943-ccb244e680c0-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.907036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f16ed9b4-0dca-404a-b943-ccb244e680c0-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: \"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.914374 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwzd9\" (UniqueName: \"kubernetes.io/projected/f16ed9b4-0dca-404a-b943-ccb244e680c0-kube-api-access-mwzd9\") pod \"logging-loki-query-frontend-6d6859c548-xv8tz\" (UID: 
\"f16ed9b4-0dca-404a-b943-ccb244e680c0\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.921720 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz"] Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.921762 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r"] Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.928080 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-648db9fc4d-chpzm"] Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.928644 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.929156 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.932175 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.932604 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.932799 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-rkvpj" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.932957 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.933091 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.933390 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.936313 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r"] Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.945117 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-648db9fc4d-chpzm"] Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.985833 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ds87\" (UniqueName: \"kubernetes.io/projected/b0bcda11-009a-411a-8e27-ea83b6953ef9-kube-api-access-8ds87\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.986142 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-rbac\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.986251 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b0bcda11-009a-411a-8e27-ea83b6953ef9-config\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.986416 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.986712 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.986851 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.986945 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/605689df-27a1-4160-b336-40c665824a83-tls-secret\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.987039 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-logging-loki-ca-bundle\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.987172 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.987312 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/605689df-27a1-4160-b336-40c665824a83-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.987559 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-tls-secret\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.987698 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.987794 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jhc6\" (UniqueName: \"kubernetes.io/projected/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-kube-api-access-5jhc6\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.987914 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.988026 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8r8x\" (UniqueName: \"kubernetes.io/projected/605689df-27a1-4160-b336-40c665824a83-kube-api-access-j8r8x\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.988132 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-logging-loki-ca-bundle\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.988242 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-rbac\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.988341 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/605689df-27a1-4160-b336-40c665824a83-tenants\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.988875 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: 
\"kubernetes.io/secret/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-tenants\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.989672 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0bcda11-009a-411a-8e27-ea83b6953ef9-config\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.989806 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.989862 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-lokistack-gateway\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.989992 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-lokistack-gateway\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.990094 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.994177 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:14 crc kubenswrapper[4805]: I0217 00:35:14.995264 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.001411 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/b0bcda11-009a-411a-8e27-ea83b6953ef9-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.012110 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ds87\" (UniqueName: \"kubernetes.io/projected/b0bcda11-009a-411a-8e27-ea83b6953ef9-kube-api-access-8ds87\") pod \"logging-loki-querier-76bf7b6d45-jkggq\" (UID: \"b0bcda11-009a-411a-8e27-ea83b6953ef9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.091876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8r8x\" (UniqueName: \"kubernetes.io/projected/605689df-27a1-4160-b336-40c665824a83-kube-api-access-j8r8x\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092129 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-logging-loki-ca-bundle\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092149 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-rbac\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092163 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/605689df-27a1-4160-b336-40c665824a83-tenants\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092183 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-tenants\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-lokistack-gateway\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092215 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-lokistack-gateway\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092242 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-rbac\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092350 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/605689df-27a1-4160-b336-40c665824a83-tls-secret\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092371 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-logging-loki-ca-bundle\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092391 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/605689df-27a1-4160-b336-40c665824a83-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092414 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-tls-secret\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092448 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jhc6\" (UniqueName: \"kubernetes.io/projected/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-kube-api-access-5jhc6\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.092465 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-logging-loki-gateway-ca-bundle\") pod 
\"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.093228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.094546 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-lokistack-gateway\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.095211 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-logging-loki-ca-bundle\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.094720 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-rbac\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.107964 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-lokistack-gateway\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.108836 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-tenants\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.110412 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-tls-secret\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.111234 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.114047 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/605689df-27a1-4160-b336-40c665824a83-tenants\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.116496 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8r8x\" (UniqueName: \"kubernetes.io/projected/605689df-27a1-4160-b336-40c665824a83-kube-api-access-j8r8x\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.116735 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/605689df-27a1-4160-b336-40c665824a83-tls-secret\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.117096 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-logging-loki-ca-bundle\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.118215 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-rbac\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.118874 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/605689df-27a1-4160-b336-40c665824a83-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.123894 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/605689df-27a1-4160-b336-40c665824a83-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-648db9fc4d-chpzm\" (UID: \"605689df-27a1-4160-b336-40c665824a83\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.134269 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jhc6\" (UniqueName: \"kubernetes.io/projected/c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359-kube-api-access-5jhc6\") pod \"logging-loki-gateway-648db9fc4d-nsb7r\" (UID: \"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359\") " pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.163044 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6"] Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.180173 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.189050 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.287130 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.297658 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.392724 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-jkggq"] Feb 17 00:35:15 crc kubenswrapper[4805]: W0217 00:35:15.404869 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0bcda11_009a_411a_8e27_ea83b6953ef9.slice/crio-ef7f579e4c9ad8c33f9fc77cc9b175c10fdaeed2fcfdc90bf927ed6ea5779075 WatchSource:0}: Error finding container ef7f579e4c9ad8c33f9fc77cc9b175c10fdaeed2fcfdc90bf927ed6ea5779075: Status 404 returned error can't find the container with id ef7f579e4c9ad8c33f9fc77cc9b175c10fdaeed2fcfdc90bf927ed6ea5779075 Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.441717 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz"] Feb 17 00:35:15 crc kubenswrapper[4805]: W0217 00:35:15.449413 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf16ed9b4_0dca_404a_b943_ccb244e680c0.slice/crio-f199058fec755d4d2914c2ffa3f70fa8bb212bf28ee1236cccfbeea9b003c7bb WatchSource:0}: Error finding container f199058fec755d4d2914c2ffa3f70fa8bb212bf28ee1236cccfbeea9b003c7bb: Status 404 returned error can't find the container with id f199058fec755d4d2914c2ffa3f70fa8bb212bf28ee1236cccfbeea9b003c7bb Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.549945 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.551051 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.553646 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.554373 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.558409 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.586169 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.586869 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.591008 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.591485 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.595869 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.677246 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.678011 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.680044 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.680265 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.698467 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.699773 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-config\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.699820 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f3ad07ce-c370-4124-b13a-a2f8f75a2069\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3ad07ce-c370-4124-b13a-a2f8f75a2069\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.699861 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5mbb\" (UniqueName: \"kubernetes.io/projected/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-kube-api-access-r5mbb\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.699891 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.699923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6sts\" (UniqueName: \"kubernetes.io/projected/03d9a31a-0121-42e4-a82e-7ee97d31beb1-kube-api-access-w6sts\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " 
pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.699949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39af2ff9-980e-49bf-a123-a1f1ef00648f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39af2ff9-980e-49bf-a123-a1f1ef00648f\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.699978 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03d9a31a-0121-42e4-a82e-7ee97d31beb1-config\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.700012 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.700046 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.700087 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.700114 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.700139 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4d2322d6-0e98-44ad-a7c4-b07ba48b6e2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d2322d6-0e98-44ad-a7c4-b07ba48b6e2a\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.700209 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.700233 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.700255 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.725479 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-648db9fc4d-chpzm"] Feb 17 00:35:15 crc kubenswrapper[4805]: W0217 00:35:15.776210 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5ac6ee2_b17b_4e79_8c1f_5cda68aa4359.slice/crio-9f9bd91edb3de6e83e7df9945eb9f9ca40b93043332d213e7793e3ba989f8bca WatchSource:0}: Error finding container 9f9bd91edb3de6e83e7df9945eb9f9ca40b93043332d213e7793e3ba989f8bca: Status 404 returned error can't find the container with id 9f9bd91edb3de6e83e7df9945eb9f9ca40b93043332d213e7793e3ba989f8bca Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.777393 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r"] Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.801652 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.801716 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.801748 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.801776 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9da5e859-b95e-463a-a318-d4ff3f518204\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9da5e859-b95e-463a-a318-d4ff3f518204\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802033 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802072 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802103 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802126 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802155 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4d2322d6-0e98-44ad-a7c4-b07ba48b6e2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d2322d6-0e98-44ad-a7c4-b07ba48b6e2a\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802205 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802237 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802264 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802288 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802342 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-config\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802368 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszz4\" (UniqueName: \"kubernetes.io/projected/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-kube-api-access-rszz4\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802393 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802531 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f3ad07ce-c370-4124-b13a-a2f8f75a2069\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3ad07ce-c370-4124-b13a-a2f8f75a2069\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802601 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5mbb\" (UniqueName: \"kubernetes.io/projected/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-kube-api-access-r5mbb\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802628 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802639 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802695 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6sts\" (UniqueName: \"kubernetes.io/projected/03d9a31a-0121-42e4-a82e-7ee97d31beb1-kube-api-access-w6sts\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39af2ff9-980e-49bf-a123-a1f1ef00648f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39af2ff9-980e-49bf-a123-a1f1ef00648f\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.802767 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03d9a31a-0121-42e4-a82e-7ee97d31beb1-config\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.804096 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03d9a31a-0121-42e4-a82e-7ee97d31beb1-config\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.804492 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-config\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.806054 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.806084 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f3ad07ce-c370-4124-b13a-a2f8f75a2069\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3ad07ce-c370-4124-b13a-a2f8f75a2069\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a9a0f7a79ea1b1c6fb67061f5bbfaa704bc0805b82c2c76a6a4015e256b6903d/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.806896 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.807393 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.807422 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4d2322d6-0e98-44ad-a7c4-b07ba48b6e2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d2322d6-0e98-44ad-a7c4-b07ba48b6e2a\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a575641899d36a29de54177657aab9d2a15d9fc54e049438e5d961a708621b29/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.807397 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.807887 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39af2ff9-980e-49bf-a123-a1f1ef00648f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39af2ff9-980e-49bf-a123-a1f1ef00648f\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/61684d2d84227301e53856f3ded913c530b7191f15f4330fa642bbfbbc4721c6/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.808713 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.808734 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.811825 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.812283 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.815137 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/03d9a31a-0121-42e4-a82e-7ee97d31beb1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.820978 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.821295 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5mbb\" (UniqueName: \"kubernetes.io/projected/b5f0edd5-0fe1-4af9-b5c7-753847dd83c6-kube-api-access-r5mbb\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.825082 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6sts\" (UniqueName: 
\"kubernetes.io/projected/03d9a31a-0121-42e4-a82e-7ee97d31beb1-kube-api-access-w6sts\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.845616 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4d2322d6-0e98-44ad-a7c4-b07ba48b6e2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4d2322d6-0e98-44ad-a7c4-b07ba48b6e2a\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.846087 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f3ad07ce-c370-4124-b13a-a2f8f75a2069\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f3ad07ce-c370-4124-b13a-a2f8f75a2069\") pod \"logging-loki-compactor-0\" (UID: \"03d9a31a-0121-42e4-a82e-7ee97d31beb1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.846713 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39af2ff9-980e-49bf-a123-a1f1ef00648f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39af2ff9-980e-49bf-a123-a1f1ef00648f\") pod \"logging-loki-ingester-0\" (UID: \"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.899951 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.903652 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9da5e859-b95e-463a-a318-d4ff3f518204\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9da5e859-b95e-463a-a318-d4ff3f518204\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.903709 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.903744 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.903786 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.903819 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rszz4\" 
(UniqueName: \"kubernetes.io/projected/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-kube-api-access-rszz4\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.903837 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.903883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.904665 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.906679 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.910445 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.913798 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.914147 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.917612 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.918282 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.918307 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9da5e859-b95e-463a-a318-d4ff3f518204\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9da5e859-b95e-463a-a318-d4ff3f518204\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3929ab29ef2c0d6c162015701bf6ef941a2aa05338a19d32c5039dcba819edcd/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:15 crc kubenswrapper[4805]: I0217 00:35:15.947038 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rszz4\" (UniqueName: \"kubernetes.io/projected/a4aa5b24-6f45-4330-bda1-89fe3963ea2b-kube-api-access-rszz4\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:16 crc kubenswrapper[4805]: I0217 00:35:16.007076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9da5e859-b95e-463a-a318-d4ff3f518204\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9da5e859-b95e-463a-a318-d4ff3f518204\") pod \"logging-loki-index-gateway-0\" (UID: \"a4aa5b24-6f45-4330-bda1-89fe3963ea2b\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:16 crc kubenswrapper[4805]: I0217 00:35:16.102237 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" event={"ID":"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359","Type":"ContainerStarted","Data":"9f9bd91edb3de6e83e7df9945eb9f9ca40b93043332d213e7793e3ba989f8bca"} Feb 17 00:35:16 crc kubenswrapper[4805]: I0217 00:35:16.103700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" event={"ID":"b0bcda11-009a-411a-8e27-ea83b6953ef9","Type":"ContainerStarted","Data":"ef7f579e4c9ad8c33f9fc77cc9b175c10fdaeed2fcfdc90bf927ed6ea5779075"} Feb 17 00:35:16 crc kubenswrapper[4805]: I0217 00:35:16.105058 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" event={"ID":"605689df-27a1-4160-b336-40c665824a83","Type":"ContainerStarted","Data":"3f50a553cd91b3f5f3d0cfb85844dffd64b56fbd08f03842d5446be9281086c7"} Feb 17 00:35:16 crc kubenswrapper[4805]: I0217 00:35:16.106258 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" event={"ID":"a39490eb-8fc3-40ae-9968-453acf06f5da","Type":"ContainerStarted","Data":"b18a6145d4fabbf4125fed8b351fa77a8cbeab0531061ab4411132fb9474c6d7"} Feb 17 00:35:16 crc kubenswrapper[4805]: I0217 00:35:16.107566 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" event={"ID":"f16ed9b4-0dca-404a-b943-ccb244e680c0","Type":"ContainerStarted","Data":"f199058fec755d4d2914c2ffa3f70fa8bb212bf28ee1236cccfbeea9b003c7bb"} Feb 17 00:35:16 crc kubenswrapper[4805]: I0217 00:35:16.302617 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:16 crc kubenswrapper[4805]: I0217 00:35:16.352664 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 17 00:35:16 crc kubenswrapper[4805]: W0217 00:35:16.360210 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5f0edd5_0fe1_4af9_b5c7_753847dd83c6.slice/crio-502b8d888bf3f5674bc0d690d0fcf596860fb7a63ef52a77cd89af9ebbf8fcb2 WatchSource:0}: Error finding container 502b8d888bf3f5674bc0d690d0fcf596860fb7a63ef52a77cd89af9ebbf8fcb2: Status 404 returned error can't find the container with id 502b8d888bf3f5674bc0d690d0fcf596860fb7a63ef52a77cd89af9ebbf8fcb2 Feb 17 00:35:16 crc kubenswrapper[4805]: I0217 00:35:16.396654 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 17 00:35:16 crc kubenswrapper[4805]: I0217 00:35:16.527148 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 17 00:35:16 crc kubenswrapper[4805]: W0217 00:35:16.557173 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4aa5b24_6f45_4330_bda1_89fe3963ea2b.slice/crio-21f31f2b5f1088db8d205b6f360899f449dd3453f0e8ec0c1cf0c03f65a54053 WatchSource:0}: Error finding container 21f31f2b5f1088db8d205b6f360899f449dd3453f0e8ec0c1cf0c03f65a54053: Status 404 returned error can't find the container with id 21f31f2b5f1088db8d205b6f360899f449dd3453f0e8ec0c1cf0c03f65a54053 Feb 17 00:35:17 crc kubenswrapper[4805]: I0217 00:35:17.113343 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"03d9a31a-0121-42e4-a82e-7ee97d31beb1","Type":"ContainerStarted","Data":"de2551b03f43969e152ffbe315aa84a4f928d8a00ed58f7578f453a480968c1e"} Feb 17 00:35:17 crc kubenswrapper[4805]: I0217 00:35:17.114973 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6","Type":"ContainerStarted","Data":"502b8d888bf3f5674bc0d690d0fcf596860fb7a63ef52a77cd89af9ebbf8fcb2"} Feb 17 00:35:17 crc kubenswrapper[4805]: I0217 00:35:17.115857 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"a4aa5b24-6f45-4330-bda1-89fe3963ea2b","Type":"ContainerStarted","Data":"21f31f2b5f1088db8d205b6f360899f449dd3453f0e8ec0c1cf0c03f65a54053"} Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.132575 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" event={"ID":"b0bcda11-009a-411a-8e27-ea83b6953ef9","Type":"ContainerStarted","Data":"ecdd0a73fb11a515cce74b897b886ef817023a37bd650dec704152d148c67999"} Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.132963 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.137718 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"a4aa5b24-6f45-4330-bda1-89fe3963ea2b","Type":"ContainerStarted","Data":"9e285b880f6b3e39a4f41467e40bc732cacc2ef9df60bd25ae2e6ff39d65f04e"} Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.137752 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.144907 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" event={"ID":"605689df-27a1-4160-b336-40c665824a83","Type":"ContainerStarted","Data":"379a304f4c95e113f4ca6652eb524d3f22eb768531866b0a053803d4be2539b6"} Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.148643 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" event={"ID":"a39490eb-8fc3-40ae-9968-453acf06f5da","Type":"ContainerStarted","Data":"0f43ff1a06f8badde0eb3ff3961964c3d750ab4957dc1100f83ae921c71e4d91"} Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.148775 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.152835 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" podStartSLOduration=1.832336995 podStartE2EDuration="5.152816786s" podCreationTimestamp="2026-02-17 00:35:14 +0000 UTC" firstStartedPulling="2026-02-17 00:35:15.406464729 +0000 UTC m=+741.422274127" lastFinishedPulling="2026-02-17 00:35:18.7269445 +0000 UTC m=+744.742753918" observedRunningTime="2026-02-17 00:35:19.150093592 +0000 UTC m=+745.165902990" watchObservedRunningTime="2026-02-17 00:35:19.152816786 +0000 UTC m=+745.168626184" Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.162606 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.164278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"b5f0edd5-0fe1-4af9-b5c7-753847dd83c6","Type":"ContainerStarted","Data":"9340d132d6c7232e5d48f631d04bf28a94de2b3a38e7f48070cdbe6d86fe265b"} Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.164364 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.165663 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" event={"ID":"f16ed9b4-0dca-404a-b943-ccb244e680c0","Type":"ContainerStarted","Data":"7ab66d50efe2e86af1968d7aa476de8aa8e2d78b4d3eb30f45cd11581ca4e9a5"} Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.165703 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.170064 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" event={"ID":"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359","Type":"ContainerStarted","Data":"ed34fdc4ca0f7241b998712d9aa6865c8b053ca6494a9790f408123682b5033f"} Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.174586 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.836189873 podStartE2EDuration="5.174560918s" podCreationTimestamp="2026-02-17 00:35:14 +0000 UTC" firstStartedPulling="2026-02-17 00:35:16.560212755 +0000 UTC 
m=+742.576022163" lastFinishedPulling="2026-02-17 00:35:18.89858381 +0000 UTC m=+744.914393208" observedRunningTime="2026-02-17 00:35:19.172292846 +0000 UTC m=+745.188102274" watchObservedRunningTime="2026-02-17 00:35:19.174560918 +0000 UTC m=+745.190370346" Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.200310 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" podStartSLOduration=1.494475113 podStartE2EDuration="5.200293327s" podCreationTimestamp="2026-02-17 00:35:14 +0000 UTC" firstStartedPulling="2026-02-17 00:35:15.169711709 +0000 UTC m=+741.185521117" lastFinishedPulling="2026-02-17 00:35:18.875529923 +0000 UTC m=+744.891339331" observedRunningTime="2026-02-17 00:35:19.199210758 +0000 UTC m=+745.215020156" watchObservedRunningTime="2026-02-17 00:35:19.200293327 +0000 UTC m=+745.216102725" Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.233438 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" podStartSLOduration=1.831263386 podStartE2EDuration="5.233424539s" podCreationTimestamp="2026-02-17 00:35:14 +0000 UTC" firstStartedPulling="2026-02-17 00:35:15.451153035 +0000 UTC m=+741.466962433" lastFinishedPulling="2026-02-17 00:35:18.853314188 +0000 UTC m=+744.869123586" observedRunningTime="2026-02-17 00:35:19.231919298 +0000 UTC m=+745.247728696" watchObservedRunningTime="2026-02-17 00:35:19.233424539 +0000 UTC m=+745.249233937" Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.234244 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=2.74196057 podStartE2EDuration="5.234239451s" podCreationTimestamp="2026-02-17 00:35:14 +0000 UTC" firstStartedPulling="2026-02-17 00:35:16.427963778 +0000 UTC m=+742.443773176" lastFinishedPulling="2026-02-17 00:35:18.920242659 +0000 UTC m=+744.936052057" observedRunningTime="2026-02-17 00:35:19.217849415 +0000 UTC m=+745.233658813" watchObservedRunningTime="2026-02-17 00:35:19.234239451 +0000 UTC m=+745.250048849" Feb 17 00:35:19 crc kubenswrapper[4805]: I0217 00:35:19.258467 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.7350582230000002 podStartE2EDuration="5.25845228s" podCreationTimestamp="2026-02-17 00:35:14 +0000 UTC" firstStartedPulling="2026-02-17 00:35:16.362859567 +0000 UTC m=+742.378668955" lastFinishedPulling="2026-02-17 00:35:18.886253614 +0000 UTC m=+744.902063012" observedRunningTime="2026-02-17 00:35:19.256224659 +0000 UTC m=+745.272034057" watchObservedRunningTime="2026-02-17 00:35:19.25845228 +0000 UTC m=+745.274261678" Feb 17 00:35:20 crc kubenswrapper[4805]: I0217 00:35:20.179259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"03d9a31a-0121-42e4-a82e-7ee97d31beb1","Type":"ContainerStarted","Data":"7d5a11cc473347c1d307951b7db592464d5465b951844daefdd3fa9cb89a4273"} Feb 17 00:35:22 crc kubenswrapper[4805]: I0217 00:35:22.203902 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" event={"ID":"c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359","Type":"ContainerStarted","Data":"f4e6042c950799f46b6ff39f75a218f8a644d9c4c81999abc7f7f3ef27e628c9"} Feb 17 00:35:22 crc kubenswrapper[4805]: I0217 00:35:22.204399 4805 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:22 crc kubenswrapper[4805]: I0217 00:35:22.204428 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:22 crc kubenswrapper[4805]: I0217 00:35:22.208488 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" event={"ID":"605689df-27a1-4160-b336-40c665824a83","Type":"ContainerStarted","Data":"85f25725704733fbb0d74330016d2070ed44d974b2a3898b64bbe433309d6a43"} Feb 17 00:35:22 crc kubenswrapper[4805]: I0217 00:35:22.209682 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:22 crc kubenswrapper[4805]: I0217 00:35:22.209737 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:22 crc kubenswrapper[4805]: I0217 00:35:22.225304 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:22 crc kubenswrapper[4805]: I0217 00:35:22.232181 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:22 crc kubenswrapper[4805]: I0217 00:35:22.232959 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" Feb 17 00:35:22 crc kubenswrapper[4805]: I0217 00:35:22.234357 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" Feb 17 00:35:22 crc kubenswrapper[4805]: I0217 00:35:22.253015 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-648db9fc4d-nsb7r" podStartSLOduration=2.877995551 podStartE2EDuration="8.252992033s" podCreationTimestamp="2026-02-17 00:35:14 +0000 UTC" firstStartedPulling="2026-02-17 00:35:15.779070646 +0000 UTC m=+741.794880044" lastFinishedPulling="2026-02-17 00:35:21.154067118 +0000 UTC m=+747.169876526" observedRunningTime="2026-02-17 00:35:22.244900033 +0000 UTC m=+748.260709471" watchObservedRunningTime="2026-02-17 00:35:22.252992033 +0000 UTC m=+748.268801441" Feb 17 00:35:23 crc kubenswrapper[4805]: I0217 00:35:23.077122 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:35:23 crc kubenswrapper[4805]: I0217 00:35:23.077562 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:35:23 crc kubenswrapper[4805]: I0217 00:35:23.077625 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:35:23 crc kubenswrapper[4805]: I0217 00:35:23.078518 4805 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94681fae909df52b2f0ea3231365723006f05038e8db255093526e2aabbaa471"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:35:23 crc kubenswrapper[4805]: I0217 00:35:23.078625 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://94681fae909df52b2f0ea3231365723006f05038e8db255093526e2aabbaa471" gracePeriod=600 Feb 17 00:35:23 crc kubenswrapper[4805]: I0217 00:35:23.223218 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="94681fae909df52b2f0ea3231365723006f05038e8db255093526e2aabbaa471" exitCode=0 Feb 17 00:35:23 crc kubenswrapper[4805]: I0217 00:35:23.224429 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"94681fae909df52b2f0ea3231365723006f05038e8db255093526e2aabbaa471"} Feb 17 00:35:23 crc kubenswrapper[4805]: I0217 00:35:23.224491 4805 scope.go:117] "RemoveContainer" containerID="a1d4cf0710e2c345e6ab83fff28c000c6465bd6ba78c6d4223f43eb52bfaa7ec" Feb 17 00:35:24 crc kubenswrapper[4805]: I0217 00:35:24.234090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"3d211867bc1681978ebc5d59d36a82514c65d45557bfedaef2dbb1dd0c87d945"} Feb 17 00:35:24 crc kubenswrapper[4805]: I0217 00:35:24.262518 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-648db9fc4d-chpzm" podStartSLOduration=4.831438553 podStartE2EDuration="10.26249199s" podCreationTimestamp="2026-02-17 00:35:14 +0000 UTC" firstStartedPulling="2026-02-17 00:35:15.719478435 +0000 UTC m=+741.735287833" lastFinishedPulling="2026-02-17 00:35:21.150531882 +0000 UTC m=+747.166341270" observedRunningTime="2026-02-17 00:35:22.315187336 +0000 UTC m=+748.330996724" watchObservedRunningTime="2026-02-17 00:35:24.26249199 +0000 UTC m=+750.278301428" Feb 17 00:35:30 crc kubenswrapper[4805]: I0217 00:35:30.058098 4805 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 00:35:34 crc kubenswrapper[4805]: I0217 00:35:34.893000 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-h56f6" Feb 17 00:35:35 crc kubenswrapper[4805]: I0217 00:35:35.187509 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-jkggq" Feb 17 00:35:35 crc kubenswrapper[4805]: I0217 00:35:35.201772 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-xv8tz" Feb 17 00:35:35 crc kubenswrapper[4805]: I0217 00:35:35.908350 4805 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance 
owns no tokens Feb 17 00:35:35 crc kubenswrapper[4805]: I0217 00:35:35.908429 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b5f0edd5-0fe1-4af9-b5c7-753847dd83c6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 00:35:35 crc kubenswrapper[4805]: I0217 00:35:35.928664 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 17 00:35:36 crc kubenswrapper[4805]: I0217 00:35:36.312115 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 17 00:35:45 crc kubenswrapper[4805]: I0217 00:35:45.908680 4805 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 17 00:35:45 crc kubenswrapper[4805]: I0217 00:35:45.909374 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b5f0edd5-0fe1-4af9-b5c7-753847dd83c6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 00:35:55 crc kubenswrapper[4805]: I0217 00:35:55.906134 4805 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 17 00:35:55 crc kubenswrapper[4805]: I0217 00:35:55.906669 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b5f0edd5-0fe1-4af9-b5c7-753847dd83c6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 00:36:05 crc kubenswrapper[4805]: I0217 00:36:05.904959 4805 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 17 00:36:05 crc kubenswrapper[4805]: I0217 00:36:05.905585 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="b5f0edd5-0fe1-4af9-b5c7-753847dd83c6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 00:36:15 crc kubenswrapper[4805]: I0217 00:36:15.907896 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.696170 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-snh7s"] Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.700597 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.709296 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.709454 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.709513 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.709750 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-5xprw" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.721476 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.722937 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.723772 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-snh7s"] Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.769624 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-snh7s"] Feb 17 00:36:33 crc kubenswrapper[4805]: E0217 00:36:33.770095 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-b9rzr metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-b9rzr metrics sa-token tmp trusted-ca]: context canceled" pod="openshift-logging/collector-snh7s" podUID="e868ebd4-31bb-4137-a754-8fcf5bc9d261" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.799304 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.805652 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.878452 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e868ebd4-31bb-4137-a754-8fcf5bc9d261-tmp\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.878514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e868ebd4-31bb-4137-a754-8fcf5bc9d261-datadir\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.878556 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-token\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.878595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-syslog-receiver\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.879043 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config-openshift-service-cacrt\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.879410 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.879603 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9rzr\" (UniqueName: \"kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-kube-api-access-b9rzr\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.879870 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-trusted-ca\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.880007 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-metrics\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " 
pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.880108 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-sa-token\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.880291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-entrypoint\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.980923 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9rzr\" (UniqueName: \"kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-kube-api-access-b9rzr\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.980999 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-trusted-ca\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.981047 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-metrics\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.981102 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-sa-token\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.981162 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-entrypoint\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.981263 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e868ebd4-31bb-4137-a754-8fcf5bc9d261-tmp\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.981296 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e868ebd4-31bb-4137-a754-8fcf5bc9d261-datadir\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.981354 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: 
\"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-token\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.981408 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-syslog-receiver\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.981469 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config-openshift-service-cacrt\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.981535 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.981580 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e868ebd4-31bb-4137-a754-8fcf5bc9d261-datadir\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.982283 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-trusted-ca\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.982898 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config-openshift-service-cacrt\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.983001 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.983650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-entrypoint\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.987719 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e868ebd4-31bb-4137-a754-8fcf5bc9d261-tmp\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.987898 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-token\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:33 crc kubenswrapper[4805]: I0217 00:36:33.987986 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-syslog-receiver\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.003164 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-metrics\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.005764 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9rzr\" (UniqueName: \"kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-kube-api-access-b9rzr\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.009300 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-sa-token\") pod \"collector-snh7s\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " pod="openshift-logging/collector-snh7s" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.082414 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e868ebd4-31bb-4137-a754-8fcf5bc9d261-datadir\") pod \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.082459 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config-openshift-service-cacrt\") pod \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.082496 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-syslog-receiver\") pod \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.082512 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e868ebd4-31bb-4137-a754-8fcf5bc9d261-tmp\") pod \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.082527 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-trusted-ca\") pod \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " Feb 17 00:36:34 crc 
kubenswrapper[4805]: I0217 00:36:34.082547 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config\") pod \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.082564 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-metrics\") pod \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.082589 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-token\") pod \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.082606 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9rzr\" (UniqueName: \"kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-kube-api-access-b9rzr\") pod \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.082627 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-sa-token\") pod \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.082651 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-entrypoint\") pod \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\" (UID: \"e868ebd4-31bb-4137-a754-8fcf5bc9d261\") " Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.082542 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e868ebd4-31bb-4137-a754-8fcf5bc9d261-datadir" (OuterVolumeSpecName: "datadir") pod "e868ebd4-31bb-4137-a754-8fcf5bc9d261" (UID: "e868ebd4-31bb-4137-a754-8fcf5bc9d261"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.083456 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "e868ebd4-31bb-4137-a754-8fcf5bc9d261" (UID: "e868ebd4-31bb-4137-a754-8fcf5bc9d261"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.083616 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e868ebd4-31bb-4137-a754-8fcf5bc9d261" (UID: "e868ebd4-31bb-4137-a754-8fcf5bc9d261"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.083622 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "e868ebd4-31bb-4137-a754-8fcf5bc9d261" (UID: "e868ebd4-31bb-4137-a754-8fcf5bc9d261"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.084469 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config" (OuterVolumeSpecName: "config") pod "e868ebd4-31bb-4137-a754-8fcf5bc9d261" (UID: "e868ebd4-31bb-4137-a754-8fcf5bc9d261"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.086239 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "e868ebd4-31bb-4137-a754-8fcf5bc9d261" (UID: "e868ebd4-31bb-4137-a754-8fcf5bc9d261"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.086637 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e868ebd4-31bb-4137-a754-8fcf5bc9d261-tmp" (OuterVolumeSpecName: "tmp") pod "e868ebd4-31bb-4137-a754-8fcf5bc9d261" (UID: "e868ebd4-31bb-4137-a754-8fcf5bc9d261"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.087033 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-token" (OuterVolumeSpecName: "collector-token") pod "e868ebd4-31bb-4137-a754-8fcf5bc9d261" (UID: "e868ebd4-31bb-4137-a754-8fcf5bc9d261"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.087536 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-metrics" (OuterVolumeSpecName: "metrics") pod "e868ebd4-31bb-4137-a754-8fcf5bc9d261" (UID: "e868ebd4-31bb-4137-a754-8fcf5bc9d261"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.088292 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-kube-api-access-b9rzr" (OuterVolumeSpecName: "kube-api-access-b9rzr") pod "e868ebd4-31bb-4137-a754-8fcf5bc9d261" (UID: "e868ebd4-31bb-4137-a754-8fcf5bc9d261"). InnerVolumeSpecName "kube-api-access-b9rzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.088946 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-sa-token" (OuterVolumeSpecName: "sa-token") pod "e868ebd4-31bb-4137-a754-8fcf5bc9d261" (UID: "e868ebd4-31bb-4137-a754-8fcf5bc9d261"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.184730 4805 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/e868ebd4-31bb-4137-a754-8fcf5bc9d261-datadir\") on node \"crc\" DevicePath \"\"" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.184779 4805 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.184804 4805 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.184823 4805 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e868ebd4-31bb-4137-a754-8fcf5bc9d261-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.184843 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.184862 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.184923 4805 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.184940 4805 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/e868ebd4-31bb-4137-a754-8fcf5bc9d261-collector-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.184958 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9rzr\" (UniqueName: \"kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-kube-api-access-b9rzr\") on node \"crc\" DevicePath \"\"" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.184976 4805 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/e868ebd4-31bb-4137-a754-8fcf5bc9d261-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.185020 4805 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/e868ebd4-31bb-4137-a754-8fcf5bc9d261-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.808863 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-snh7s" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.913699 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-snh7s"] Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.920153 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-snh7s"] Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.928838 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-ncz6q"] Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.930165 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ncz6q" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.933100 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-5xprw" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.935255 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ncz6q"] Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.937416 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.938138 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.938922 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.939359 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.945177 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.999779 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-entrypoint\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.999836 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fddfe695-106b-4180-b8bb-57ad148b8a6d-collector-token\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.999853 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-config-openshift-service-cacrt\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.999873 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fddfe695-106b-4180-b8bb-57ad148b8a6d-collector-syslog-receiver\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:34 crc 
kubenswrapper[4805]: I0217 00:36:34.999887 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fddfe695-106b-4180-b8bb-57ad148b8a6d-metrics\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.999902 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fddfe695-106b-4180-b8bb-57ad148b8a6d-sa-token\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:34 crc kubenswrapper[4805]: I0217 00:36:34.999940 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-trusted-ca\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.000156 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fddfe695-106b-4180-b8bb-57ad148b8a6d-datadir\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.000242 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-config\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.000300 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kp79\" (UniqueName: \"kubernetes.io/projected/fddfe695-106b-4180-b8bb-57ad148b8a6d-kube-api-access-2kp79\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.000362 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fddfe695-106b-4180-b8bb-57ad148b8a6d-tmp\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.101174 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-config\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.101246 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kp79\" (UniqueName: \"kubernetes.io/projected/fddfe695-106b-4180-b8bb-57ad148b8a6d-kube-api-access-2kp79\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.101285 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/fddfe695-106b-4180-b8bb-57ad148b8a6d-tmp\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.101473 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-entrypoint\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.101698 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fddfe695-106b-4180-b8bb-57ad148b8a6d-collector-token\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.101894 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-config-openshift-service-cacrt\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.102059 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fddfe695-106b-4180-b8bb-57ad148b8a6d-collector-syslog-receiver\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.102107 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fddfe695-106b-4180-b8bb-57ad148b8a6d-metrics\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.102150 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fddfe695-106b-4180-b8bb-57ad148b8a6d-sa-token\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.102198 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-trusted-ca\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.102258 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fddfe695-106b-4180-b8bb-57ad148b8a6d-datadir\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.102396 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fddfe695-106b-4180-b8bb-57ad148b8a6d-datadir\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.105549 
4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-config\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.106265 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-entrypoint\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.106761 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-trusted-ca\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.107422 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fddfe695-106b-4180-b8bb-57ad148b8a6d-config-openshift-service-cacrt\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.108379 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fddfe695-106b-4180-b8bb-57ad148b8a6d-tmp\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.111140 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fddfe695-106b-4180-b8bb-57ad148b8a6d-collector-syslog-receiver\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.118234 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fddfe695-106b-4180-b8bb-57ad148b8a6d-collector-token\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.119960 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fddfe695-106b-4180-b8bb-57ad148b8a6d-metrics\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.131022 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fddfe695-106b-4180-b8bb-57ad148b8a6d-sa-token\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.131498 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kp79\" (UniqueName: \"kubernetes.io/projected/fddfe695-106b-4180-b8bb-57ad148b8a6d-kube-api-access-2kp79\") pod \"collector-ncz6q\" (UID: \"fddfe695-106b-4180-b8bb-57ad148b8a6d\") " pod="openshift-logging/collector-ncz6q" Feb 17 
00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.264554 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ncz6q" Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.786969 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ncz6q"] Feb 17 00:36:35 crc kubenswrapper[4805]: I0217 00:36:35.818245 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ncz6q" event={"ID":"fddfe695-106b-4180-b8bb-57ad148b8a6d","Type":"ContainerStarted","Data":"77085b9e57d392f2525ea8c1b8703793a5accca7eec73f1f20920e17be55a775"} Feb 17 00:36:36 crc kubenswrapper[4805]: I0217 00:36:36.801187 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e868ebd4-31bb-4137-a754-8fcf5bc9d261" path="/var/lib/kubelet/pods/e868ebd4-31bb-4137-a754-8fcf5bc9d261/volumes" Feb 17 00:36:41 crc kubenswrapper[4805]: I0217 00:36:41.867510 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ncz6q" event={"ID":"fddfe695-106b-4180-b8bb-57ad148b8a6d","Type":"ContainerStarted","Data":"4495b49405cf04b8e4cb9d5faf9c69db01a422d33f3d7defd1293e153037a30e"} Feb 17 00:36:41 crc kubenswrapper[4805]: I0217 00:36:41.906671 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-ncz6q" podStartSLOduration=2.6211466530000003 podStartE2EDuration="7.906647512s" podCreationTimestamp="2026-02-17 00:36:34 +0000 UTC" firstStartedPulling="2026-02-17 00:36:35.80162733 +0000 UTC m=+821.817436768" lastFinishedPulling="2026-02-17 00:36:41.087128219 +0000 UTC m=+827.102937627" observedRunningTime="2026-02-17 00:36:41.897587527 +0000 UTC m=+827.913396955" watchObservedRunningTime="2026-02-17 00:36:41.906647512 +0000 UTC m=+827.922456950" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.347375 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c"] Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.348958 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.352402 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.366238 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c"] Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.400422 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.400489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngl62\" (UniqueName: \"kubernetes.io/projected/0b16c780-85de-4448-9515-790e38240412-kube-api-access-ngl62\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.400531 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.502195 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.502619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngl62\" (UniqueName: \"kubernetes.io/projected/0b16c780-85de-4448-9515-790e38240412-kube-api-access-ngl62\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.502659 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.502821 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.503158 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.531417 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngl62\" (UniqueName: \"kubernetes.io/projected/0b16c780-85de-4448-9515-790e38240412-kube-api-access-ngl62\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:06 crc kubenswrapper[4805]: I0217 00:37:06.752189 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:07 crc kubenswrapper[4805]: I0217 00:37:07.001347 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c"] Feb 17 00:37:07 crc kubenswrapper[4805]: I0217 00:37:07.069577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" event={"ID":"0b16c780-85de-4448-9515-790e38240412","Type":"ContainerStarted","Data":"bfaeb23bf4a4d0aa9849599f17cb0f95ef851d340a99947d47d4a47dfe545cb1"} Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.079618 4805 generic.go:334] "Generic (PLEG): container finished" podID="0b16c780-85de-4448-9515-790e38240412" containerID="da9102c4e9b3e1551a7cf44201d6c0a1a73a7694ccede1ebb20dc631190bb8f0" exitCode=0 Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.080119 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" event={"ID":"0b16c780-85de-4448-9515-790e38240412","Type":"ContainerDied","Data":"da9102c4e9b3e1551a7cf44201d6c0a1a73a7694ccede1ebb20dc631190bb8f0"} Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.682713 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4r8vh"] Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.684310 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.695957 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4r8vh"] Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.738945 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-catalog-content\") pod \"redhat-operators-4r8vh\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.739029 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-utilities\") pod \"redhat-operators-4r8vh\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.739133 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bvxh\" (UniqueName: \"kubernetes.io/projected/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-kube-api-access-5bvxh\") pod \"redhat-operators-4r8vh\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.840288 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bvxh\" (UniqueName: \"kubernetes.io/projected/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-kube-api-access-5bvxh\") pod \"redhat-operators-4r8vh\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.840396 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-catalog-content\") pod \"redhat-operators-4r8vh\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.840439 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-utilities\") pod \"redhat-operators-4r8vh\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.840903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-utilities\") pod \"redhat-operators-4r8vh\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.840958 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-catalog-content\") pod \"redhat-operators-4r8vh\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:08 crc kubenswrapper[4805]: I0217 00:37:08.860587 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5bvxh\" (UniqueName: \"kubernetes.io/projected/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-kube-api-access-5bvxh\") pod \"redhat-operators-4r8vh\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:09 crc kubenswrapper[4805]: I0217 00:37:09.047203 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:09 crc kubenswrapper[4805]: I0217 00:37:09.592016 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4r8vh"] Feb 17 00:37:09 crc kubenswrapper[4805]: W0217 00:37:09.599786 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35e2f514_2f4e_48ec_9d7d_8e0fefccfdfa.slice/crio-67a5e5414d3ae8f60f501a3d3f6664197f793dfdbe7f054651a36b8683d7a6df WatchSource:0}: Error finding container 67a5e5414d3ae8f60f501a3d3f6664197f793dfdbe7f054651a36b8683d7a6df: Status 404 returned error can't find the container with id 67a5e5414d3ae8f60f501a3d3f6664197f793dfdbe7f054651a36b8683d7a6df Feb 17 00:37:10 crc kubenswrapper[4805]: I0217 00:37:10.095748 4805 generic.go:334] "Generic (PLEG): container finished" podID="0b16c780-85de-4448-9515-790e38240412" containerID="42cfebfd4999ce3b2f16780e14c5394b7931490a674abc9bfd8459c2a6d6b000" exitCode=0 Feb 17 00:37:10 crc kubenswrapper[4805]: I0217 00:37:10.095790 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" event={"ID":"0b16c780-85de-4448-9515-790e38240412","Type":"ContainerDied","Data":"42cfebfd4999ce3b2f16780e14c5394b7931490a674abc9bfd8459c2a6d6b000"} Feb 17 00:37:10 crc kubenswrapper[4805]: I0217 00:37:10.097687 4805 generic.go:334] "Generic (PLEG): container finished" podID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerID="75bf59d7bdebd8789fb0b41e0c7f69d383de61c39308c48917374d4636a13a41" exitCode=0 Feb 17 00:37:10 crc kubenswrapper[4805]: I0217 00:37:10.097726 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r8vh" event={"ID":"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa","Type":"ContainerDied","Data":"75bf59d7bdebd8789fb0b41e0c7f69d383de61c39308c48917374d4636a13a41"} Feb 17 00:37:10 crc kubenswrapper[4805]: I0217 00:37:10.097768 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r8vh" event={"ID":"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa","Type":"ContainerStarted","Data":"67a5e5414d3ae8f60f501a3d3f6664197f793dfdbe7f054651a36b8683d7a6df"} Feb 17 00:37:11 crc kubenswrapper[4805]: I0217 00:37:11.108471 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r8vh" event={"ID":"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa","Type":"ContainerStarted","Data":"c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256"} Feb 17 00:37:11 crc kubenswrapper[4805]: I0217 00:37:11.112882 4805 generic.go:334] "Generic (PLEG): container finished" podID="0b16c780-85de-4448-9515-790e38240412" containerID="f1bebe021e94c48dd32b6b315db1ea8407e7a3693973572eaa818f444dcfcc14" exitCode=0 Feb 17 00:37:11 crc kubenswrapper[4805]: I0217 00:37:11.112945 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" 
event={"ID":"0b16c780-85de-4448-9515-790e38240412","Type":"ContainerDied","Data":"f1bebe021e94c48dd32b6b315db1ea8407e7a3693973572eaa818f444dcfcc14"} Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.124046 4805 generic.go:334] "Generic (PLEG): container finished" podID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerID="c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256" exitCode=0 Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.124158 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r8vh" event={"ID":"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa","Type":"ContainerDied","Data":"c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256"} Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.389081 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.415742 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngl62\" (UniqueName: \"kubernetes.io/projected/0b16c780-85de-4448-9515-790e38240412-kube-api-access-ngl62\") pod \"0b16c780-85de-4448-9515-790e38240412\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.415794 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-util\") pod \"0b16c780-85de-4448-9515-790e38240412\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.415842 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-bundle\") pod \"0b16c780-85de-4448-9515-790e38240412\" (UID: \"0b16c780-85de-4448-9515-790e38240412\") " Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.416663 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-bundle" (OuterVolumeSpecName: "bundle") pod "0b16c780-85de-4448-9515-790e38240412" (UID: "0b16c780-85de-4448-9515-790e38240412"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.420836 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b16c780-85de-4448-9515-790e38240412-kube-api-access-ngl62" (OuterVolumeSpecName: "kube-api-access-ngl62") pod "0b16c780-85de-4448-9515-790e38240412" (UID: "0b16c780-85de-4448-9515-790e38240412"). InnerVolumeSpecName "kube-api-access-ngl62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.445671 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-util" (OuterVolumeSpecName: "util") pod "0b16c780-85de-4448-9515-790e38240412" (UID: "0b16c780-85de-4448-9515-790e38240412"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.517625 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngl62\" (UniqueName: \"kubernetes.io/projected/0b16c780-85de-4448-9515-790e38240412-kube-api-access-ngl62\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.517651 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-util\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:12 crc kubenswrapper[4805]: I0217 00:37:12.517660 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b16c780-85de-4448-9515-790e38240412-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:13 crc kubenswrapper[4805]: I0217 00:37:13.131190 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r8vh" event={"ID":"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa","Type":"ContainerStarted","Data":"191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2"} Feb 17 00:37:13 crc kubenswrapper[4805]: I0217 00:37:13.134319 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" event={"ID":"0b16c780-85de-4448-9515-790e38240412","Type":"ContainerDied","Data":"bfaeb23bf4a4d0aa9849599f17cb0f95ef851d340a99947d47d4a47dfe545cb1"} Feb 17 00:37:13 crc kubenswrapper[4805]: I0217 00:37:13.134384 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfaeb23bf4a4d0aa9849599f17cb0f95ef851d340a99947d47d4a47dfe545cb1" Feb 17 00:37:13 crc kubenswrapper[4805]: I0217 00:37:13.134422 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c" Feb 17 00:37:13 crc kubenswrapper[4805]: I0217 00:37:13.151529 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4r8vh" podStartSLOduration=2.714329305 podStartE2EDuration="5.151513016s" podCreationTimestamp="2026-02-17 00:37:08 +0000 UTC" firstStartedPulling="2026-02-17 00:37:10.099237488 +0000 UTC m=+856.115046896" lastFinishedPulling="2026-02-17 00:37:12.536421209 +0000 UTC m=+858.552230607" observedRunningTime="2026-02-17 00:37:13.149602954 +0000 UTC m=+859.165412362" watchObservedRunningTime="2026-02-17 00:37:13.151513016 +0000 UTC m=+859.167322414" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.748378 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-chqqc"] Feb 17 00:37:17 crc kubenswrapper[4805]: E0217 00:37:17.748870 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b16c780-85de-4448-9515-790e38240412" containerName="util" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.748882 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b16c780-85de-4448-9515-790e38240412" containerName="util" Feb 17 00:37:17 crc kubenswrapper[4805]: E0217 00:37:17.748893 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b16c780-85de-4448-9515-790e38240412" containerName="extract" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.748899 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b16c780-85de-4448-9515-790e38240412" containerName="extract" Feb 17 00:37:17 crc kubenswrapper[4805]: E0217 00:37:17.748916 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b16c780-85de-4448-9515-790e38240412" containerName="pull" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.748922 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b16c780-85de-4448-9515-790e38240412" containerName="pull" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.749035 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b16c780-85de-4448-9515-790e38240412" containerName="extract" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.749464 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-chqqc" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.751189 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-w47gz" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.751385 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.751623 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.766199 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-chqqc"] Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.789834 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwwtf\" (UniqueName: \"kubernetes.io/projected/a78196b5-495a-412c-b5fb-a1905e5fbeff-kube-api-access-rwwtf\") pod \"nmstate-operator-694c9596b7-chqqc\" (UID: \"a78196b5-495a-412c-b5fb-a1905e5fbeff\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-chqqc" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.891615 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwwtf\" (UniqueName: \"kubernetes.io/projected/a78196b5-495a-412c-b5fb-a1905e5fbeff-kube-api-access-rwwtf\") pod \"nmstate-operator-694c9596b7-chqqc\" (UID: \"a78196b5-495a-412c-b5fb-a1905e5fbeff\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-chqqc" Feb 17 00:37:17 crc kubenswrapper[4805]: I0217 00:37:17.913165 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwwtf\" (UniqueName: \"kubernetes.io/projected/a78196b5-495a-412c-b5fb-a1905e5fbeff-kube-api-access-rwwtf\") pod \"nmstate-operator-694c9596b7-chqqc\" (UID: \"a78196b5-495a-412c-b5fb-a1905e5fbeff\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-chqqc" Feb 17 00:37:18 crc kubenswrapper[4805]: I0217 00:37:18.070994 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-chqqc" Feb 17 00:37:18 crc kubenswrapper[4805]: I0217 00:37:18.530593 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-chqqc"] Feb 17 00:37:19 crc kubenswrapper[4805]: I0217 00:37:19.047874 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:19 crc kubenswrapper[4805]: I0217 00:37:19.047921 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:19 crc kubenswrapper[4805]: I0217 00:37:19.219783 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-chqqc" event={"ID":"a78196b5-495a-412c-b5fb-a1905e5fbeff","Type":"ContainerStarted","Data":"502c7ccee49461fe6e189f1db38cc6e33732e3b3cc60b8537072874415133ad5"} Feb 17 00:37:20 crc kubenswrapper[4805]: I0217 00:37:20.111014 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4r8vh" podUID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerName="registry-server" probeResult="failure" output=< Feb 17 00:37:20 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 00:37:20 crc kubenswrapper[4805]: > Feb 17 00:37:21 crc kubenswrapper[4805]: I0217 00:37:21.235018 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-chqqc" event={"ID":"a78196b5-495a-412c-b5fb-a1905e5fbeff","Type":"ContainerStarted","Data":"ae0cca8703aed6b5e61aff86f60723ff40d1167b7d5d126d334f28792c8c71b3"} Feb 17 00:37:21 crc kubenswrapper[4805]: I0217 00:37:21.258078 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-chqqc" podStartSLOduration=2.166463085 podStartE2EDuration="4.25805566s" podCreationTimestamp="2026-02-17 00:37:17 +0000 UTC" firstStartedPulling="2026-02-17 00:37:18.550185194 +0000 UTC m=+864.565994582" lastFinishedPulling="2026-02-17 00:37:20.641777759 +0000 UTC m=+866.657587157" observedRunningTime="2026-02-17 00:37:21.250426511 +0000 UTC m=+867.266235919" watchObservedRunningTime="2026-02-17 00:37:21.25805566 +0000 UTC m=+867.273865058" Feb 17 00:37:23 crc kubenswrapper[4805]: I0217 00:37:23.076996 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:37:23 crc kubenswrapper[4805]: I0217 00:37:23.077053 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.515200 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7lswf"] Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.516851 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7lswf" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.524012 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9hspc" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.525206 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x"] Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.526344 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.529709 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7lswf"] Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.532764 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.541169 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-j2dnr"] Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.542216 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.547441 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x"] Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.550594 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snvn9\" (UniqueName: \"kubernetes.io/projected/3864820c-89a0-409c-84a6-7b4145026b77-kube-api-access-snvn9\") pod \"nmstate-metrics-58c85c668d-7lswf\" (UID: \"3864820c-89a0-409c-84a6-7b4145026b77\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7lswf" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.647296 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p"] Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.648197 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.651319 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7t9g\" (UniqueName: \"kubernetes.io/projected/cb306405-b68c-4891-a537-df576d06ea6f-kube-api-access-q7t9g\") pod \"nmstate-webhook-866bcb46dc-m5d7x\" (UID: \"cb306405-b68c-4891-a537-df576d06ea6f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.651389 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snvn9\" (UniqueName: \"kubernetes.io/projected/3864820c-89a0-409c-84a6-7b4145026b77-kube-api-access-snvn9\") pod \"nmstate-metrics-58c85c668d-7lswf\" (UID: \"3864820c-89a0-409c-84a6-7b4145026b77\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7lswf" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.651417 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6khjz\" (UniqueName: \"kubernetes.io/projected/35950c0f-8c05-4840-b6cb-7b61fd07008d-kube-api-access-6khjz\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.651443 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/35950c0f-8c05-4840-b6cb-7b61fd07008d-ovs-socket\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.651485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35950c0f-8c05-4840-b6cb-7b61fd07008d-dbus-socket\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.651512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cb306405-b68c-4891-a537-df576d06ea6f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-m5d7x\" (UID: \"cb306405-b68c-4891-a537-df576d06ea6f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.651534 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35950c0f-8c05-4840-b6cb-7b61fd07008d-nmstate-lock\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.660219 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wscvz" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.660416 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.660445 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.678410 4805 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p"] Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.687905 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snvn9\" (UniqueName: \"kubernetes.io/projected/3864820c-89a0-409c-84a6-7b4145026b77-kube-api-access-snvn9\") pod \"nmstate-metrics-58c85c668d-7lswf\" (UID: \"3864820c-89a0-409c-84a6-7b4145026b77\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7lswf" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.756067 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfngs\" (UniqueName: \"kubernetes.io/projected/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-kube-api-access-mfngs\") pod \"nmstate-console-plugin-5c78fc5d65-ww84p\" (UID: \"3a4aeea4-aa38-45c9-9aaa-13670a1602fe\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.756133 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7t9g\" (UniqueName: \"kubernetes.io/projected/cb306405-b68c-4891-a537-df576d06ea6f-kube-api-access-q7t9g\") pod \"nmstate-webhook-866bcb46dc-m5d7x\" (UID: \"cb306405-b68c-4891-a537-df576d06ea6f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.756168 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6khjz\" (UniqueName: \"kubernetes.io/projected/35950c0f-8c05-4840-b6cb-7b61fd07008d-kube-api-access-6khjz\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.756209 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/35950c0f-8c05-4840-b6cb-7b61fd07008d-ovs-socket\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.756232 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-ww84p\" (UID: \"3a4aeea4-aa38-45c9-9aaa-13670a1602fe\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.756288 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35950c0f-8c05-4840-b6cb-7b61fd07008d-dbus-socket\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.756341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cb306405-b68c-4891-a537-df576d06ea6f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-m5d7x\" (UID: \"cb306405-b68c-4891-a537-df576d06ea6f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.756365 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-ww84p\" (UID: \"3a4aeea4-aa38-45c9-9aaa-13670a1602fe\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.756393 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35950c0f-8c05-4840-b6cb-7b61fd07008d-nmstate-lock\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.756506 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35950c0f-8c05-4840-b6cb-7b61fd07008d-nmstate-lock\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.756966 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/35950c0f-8c05-4840-b6cb-7b61fd07008d-ovs-socket\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.757240 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35950c0f-8c05-4840-b6cb-7b61fd07008d-dbus-socket\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: E0217 00:37:27.757336 4805 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 17 00:37:27 crc kubenswrapper[4805]: E0217 00:37:27.757421 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb306405-b68c-4891-a537-df576d06ea6f-tls-key-pair podName:cb306405-b68c-4891-a537-df576d06ea6f nodeName:}" failed. No retries permitted until 2026-02-17 00:37:28.257402172 +0000 UTC m=+874.273211570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/cb306405-b68c-4891-a537-df576d06ea6f-tls-key-pair") pod "nmstate-webhook-866bcb46dc-m5d7x" (UID: "cb306405-b68c-4891-a537-df576d06ea6f") : secret "openshift-nmstate-webhook" not found Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.785215 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7t9g\" (UniqueName: \"kubernetes.io/projected/cb306405-b68c-4891-a537-df576d06ea6f-kube-api-access-q7t9g\") pod \"nmstate-webhook-866bcb46dc-m5d7x\" (UID: \"cb306405-b68c-4891-a537-df576d06ea6f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.785252 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6khjz\" (UniqueName: \"kubernetes.io/projected/35950c0f-8c05-4840-b6cb-7b61fd07008d-kube-api-access-6khjz\") pod \"nmstate-handler-j2dnr\" (UID: \"35950c0f-8c05-4840-b6cb-7b61fd07008d\") " pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.837275 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7lswf" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.858288 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-ww84p\" (UID: \"3a4aeea4-aa38-45c9-9aaa-13670a1602fe\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.858714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfngs\" (UniqueName: \"kubernetes.io/projected/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-kube-api-access-mfngs\") pod \"nmstate-console-plugin-5c78fc5d65-ww84p\" (UID: \"3a4aeea4-aa38-45c9-9aaa-13670a1602fe\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.858764 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-ww84p\" (UID: \"3a4aeea4-aa38-45c9-9aaa-13670a1602fe\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.858938 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:27 crc kubenswrapper[4805]: E0217 00:37:27.860497 4805 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 17 00:37:27 crc kubenswrapper[4805]: E0217 00:37:27.860581 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-plugin-serving-cert podName:3a4aeea4-aa38-45c9-9aaa-13670a1602fe nodeName:}" failed. No retries permitted until 2026-02-17 00:37:28.360557692 +0000 UTC m=+874.376367090 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-ww84p" (UID: "3a4aeea4-aa38-45c9-9aaa-13670a1602fe") : secret "plugin-serving-cert" not found Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.860866 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-ww84p\" (UID: \"3a4aeea4-aa38-45c9-9aaa-13670a1602fe\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.890397 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfngs\" (UniqueName: \"kubernetes.io/projected/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-kube-api-access-mfngs\") pod \"nmstate-console-plugin-5c78fc5d65-ww84p\" (UID: \"3a4aeea4-aa38-45c9-9aaa-13670a1602fe\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.972304 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-68cc555589-d9q87"] Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.973192 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:27 crc kubenswrapper[4805]: I0217 00:37:27.993780 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68cc555589-d9q87"] Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.060986 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-oauth-serving-cert\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.061036 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-trusted-ca-bundle\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.061084 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-config\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.061150 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbgmb\" (UniqueName: \"kubernetes.io/projected/706bb0a5-075b-4a4e-93b1-ca1da7c16756-kube-api-access-wbgmb\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.061200 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-oauth-config\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.061219 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-service-ca\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.061237 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-serving-cert\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.162404 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-config\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc 
kubenswrapper[4805]: I0217 00:37:28.162472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbgmb\" (UniqueName: \"kubernetes.io/projected/706bb0a5-075b-4a4e-93b1-ca1da7c16756-kube-api-access-wbgmb\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.162527 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-oauth-config\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.162546 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-service-ca\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.162562 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-serving-cert\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.162597 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-oauth-serving-cert\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.162616 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-trusted-ca-bundle\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.163389 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-service-ca\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.163456 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-config\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.163606 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-trusted-ca-bundle\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 
00:37:28.163649 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-oauth-serving-cert\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.167228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-serving-cert\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.168023 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-oauth-config\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.180252 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbgmb\" (UniqueName: \"kubernetes.io/projected/706bb0a5-075b-4a4e-93b1-ca1da7c16756-kube-api-access-wbgmb\") pod \"console-68cc555589-d9q87\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.263674 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cb306405-b68c-4891-a537-df576d06ea6f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-m5d7x\" (UID: \"cb306405-b68c-4891-a537-df576d06ea6f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.266959 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cb306405-b68c-4891-a537-df576d06ea6f-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-m5d7x\" (UID: \"cb306405-b68c-4891-a537-df576d06ea6f\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.282796 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-j2dnr" event={"ID":"35950c0f-8c05-4840-b6cb-7b61fd07008d","Type":"ContainerStarted","Data":"0cb15f972e9e7f974ec6d58500f5119a9e0c11a07e5039a31a837466dacf7dd3"} Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.307280 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.366467 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-ww84p\" (UID: \"3a4aeea4-aa38-45c9-9aaa-13670a1602fe\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.370430 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a4aeea4-aa38-45c9-9aaa-13670a1602fe-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-ww84p\" (UID: \"3a4aeea4-aa38-45c9-9aaa-13670a1602fe\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.373614 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7lswf"] Feb 17 00:37:28 crc kubenswrapper[4805]: W0217 00:37:28.378824 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3864820c_89a0_409c_84a6_7b4145026b77.slice/crio-eb268eea79d60382490bad1923f122e2b327d55e097fc87031a4ea78864fc558 WatchSource:0}: Error finding container eb268eea79d60382490bad1923f122e2b327d55e097fc87031a4ea78864fc558: Status 404 returned error can't find the container with id eb268eea79d60382490bad1923f122e2b327d55e097fc87031a4ea78864fc558 Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.447116 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.564538 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.723339 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68cc555589-d9q87"] Feb 17 00:37:28 crc kubenswrapper[4805]: W0217 00:37:28.734385 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706bb0a5_075b_4a4e_93b1_ca1da7c16756.slice/crio-ed2f53cfe465b2aa306b4ff38645378653f0d726d2a2f11cdf88a7a454242b9f WatchSource:0}: Error finding container ed2f53cfe465b2aa306b4ff38645378653f0d726d2a2f11cdf88a7a454242b9f: Status 404 returned error can't find the container with id ed2f53cfe465b2aa306b4ff38645378653f0d726d2a2f11cdf88a7a454242b9f Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.851351 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x"] Feb 17 00:37:28 crc kubenswrapper[4805]: W0217 00:37:28.859285 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb306405_b68c_4891_a537_df576d06ea6f.slice/crio-c1a4a8c98180cb9388425a5916b949131b8d0c37b94c2ea80bddc178027e16a0 WatchSource:0}: Error finding container c1a4a8c98180cb9388425a5916b949131b8d0c37b94c2ea80bddc178027e16a0: Status 404 returned error can't find the container with id c1a4a8c98180cb9388425a5916b949131b8d0c37b94c2ea80bddc178027e16a0 Feb 17 00:37:28 crc kubenswrapper[4805]: I0217 00:37:28.967798 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p"] Feb 17 00:37:28 crc kubenswrapper[4805]: W0217 00:37:28.977243 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a4aeea4_aa38_45c9_9aaa_13670a1602fe.slice/crio-2690f85f4c1bdfa2030096071322d26a686bbc7b387453e94d295499f2095092 WatchSource:0}: Error finding container 2690f85f4c1bdfa2030096071322d26a686bbc7b387453e94d295499f2095092: Status 404 returned error can't find the container with id 2690f85f4c1bdfa2030096071322d26a686bbc7b387453e94d295499f2095092 Feb 17 00:37:29 crc kubenswrapper[4805]: I0217 00:37:29.101295 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:29 crc kubenswrapper[4805]: I0217 00:37:29.151952 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:29 crc kubenswrapper[4805]: I0217 00:37:29.290512 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7lswf" event={"ID":"3864820c-89a0-409c-84a6-7b4145026b77","Type":"ContainerStarted","Data":"eb268eea79d60382490bad1923f122e2b327d55e097fc87031a4ea78864fc558"} Feb 17 00:37:29 crc kubenswrapper[4805]: I0217 00:37:29.291925 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cc555589-d9q87" event={"ID":"706bb0a5-075b-4a4e-93b1-ca1da7c16756","Type":"ContainerStarted","Data":"91906243153f670bf0b208f90581174f31e99303492bb1dbf70ae40a2be7395f"} Feb 17 00:37:29 crc kubenswrapper[4805]: I0217 00:37:29.291948 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cc555589-d9q87" 
event={"ID":"706bb0a5-075b-4a4e-93b1-ca1da7c16756","Type":"ContainerStarted","Data":"ed2f53cfe465b2aa306b4ff38645378653f0d726d2a2f11cdf88a7a454242b9f"} Feb 17 00:37:29 crc kubenswrapper[4805]: I0217 00:37:29.293681 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" event={"ID":"cb306405-b68c-4891-a537-df576d06ea6f","Type":"ContainerStarted","Data":"c1a4a8c98180cb9388425a5916b949131b8d0c37b94c2ea80bddc178027e16a0"} Feb 17 00:37:29 crc kubenswrapper[4805]: I0217 00:37:29.295140 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" event={"ID":"3a4aeea4-aa38-45c9-9aaa-13670a1602fe","Type":"ContainerStarted","Data":"2690f85f4c1bdfa2030096071322d26a686bbc7b387453e94d295499f2095092"} Feb 17 00:37:29 crc kubenswrapper[4805]: I0217 00:37:29.311297 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-68cc555589-d9q87" podStartSLOduration=2.311275814 podStartE2EDuration="2.311275814s" podCreationTimestamp="2026-02-17 00:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:37:29.30749963 +0000 UTC m=+875.323309028" watchObservedRunningTime="2026-02-17 00:37:29.311275814 +0000 UTC m=+875.327085212" Feb 17 00:37:29 crc kubenswrapper[4805]: I0217 00:37:29.332273 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4r8vh"] Feb 17 00:37:30 crc kubenswrapper[4805]: I0217 00:37:30.301542 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4r8vh" podUID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerName="registry-server" containerID="cri-o://191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2" gracePeriod=2 Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.003199 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.124250 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bvxh\" (UniqueName: \"kubernetes.io/projected/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-kube-api-access-5bvxh\") pod \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.124311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-utilities\") pod \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.124412 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-catalog-content\") pod \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\" (UID: \"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa\") " Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.126148 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-utilities" (OuterVolumeSpecName: "utilities") pod "35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" (UID: "35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.132014 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-kube-api-access-5bvxh" (OuterVolumeSpecName: "kube-api-access-5bvxh") pod "35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" (UID: "35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa"). InnerVolumeSpecName "kube-api-access-5bvxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.225991 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bvxh\" (UniqueName: \"kubernetes.io/projected/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-kube-api-access-5bvxh\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.226032 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.247652 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" (UID: "35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.310820 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7lswf" event={"ID":"3864820c-89a0-409c-84a6-7b4145026b77","Type":"ContainerStarted","Data":"09e846555dbeb8758ee219713306b5089ddd4881f9d7bb57f47e5ab07dc91917"} Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.312265 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" event={"ID":"cb306405-b68c-4891-a537-df576d06ea6f","Type":"ContainerStarted","Data":"3a8d3bb8372dcecdabda9924a90549a3520ae56b3b976e2dda36cd366da62ea6"} Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.313086 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.315551 4805 generic.go:334] "Generic (PLEG): container finished" podID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerID="191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2" exitCode=0 Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.315611 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r8vh" event={"ID":"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa","Type":"ContainerDied","Data":"191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2"} Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.315640 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r8vh" event={"ID":"35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa","Type":"ContainerDied","Data":"67a5e5414d3ae8f60f501a3d3f6664197f793dfdbe7f054651a36b8683d7a6df"} Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.315655 4805 scope.go:117] "RemoveContainer" containerID="191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.315784 4805 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4r8vh" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.318756 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-j2dnr" event={"ID":"35950c0f-8c05-4840-b6cb-7b61fd07008d","Type":"ContainerStarted","Data":"6b3c7ec0c21b17b8d6d39bc42488fd0a890d047c97eaa14f7e4032b314dc6f8e"} Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.318920 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.327016 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.343184 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" podStartSLOduration=2.449601996 podStartE2EDuration="4.343169556s" podCreationTimestamp="2026-02-17 00:37:27 +0000 UTC" firstStartedPulling="2026-02-17 00:37:28.86162373 +0000 UTC m=+874.877433128" lastFinishedPulling="2026-02-17 00:37:30.75519129 +0000 UTC m=+876.771000688" observedRunningTime="2026-02-17 00:37:31.325820412 +0000 UTC m=+877.341629830" watchObservedRunningTime="2026-02-17 00:37:31.343169556 +0000 UTC m=+877.358978954" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.371913 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-j2dnr" podStartSLOduration=1.5507641319999999 podStartE2EDuration="4.371879781s" podCreationTimestamp="2026-02-17 00:37:27 +0000 UTC" firstStartedPulling="2026-02-17 00:37:27.925497017 +0000 UTC m=+873.941306415" lastFinishedPulling="2026-02-17 00:37:30.746612666 +0000 UTC m=+876.762422064" observedRunningTime="2026-02-17 00:37:31.36123986 +0000 UTC m=+877.377049288" watchObservedRunningTime="2026-02-17 00:37:31.371879781 +0000 UTC m=+877.387689189" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.398904 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4r8vh"] Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.404631 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4r8vh"] Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.733627 4805 scope.go:117] "RemoveContainer" containerID="c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.778353 4805 scope.go:117] "RemoveContainer" containerID="75bf59d7bdebd8789fb0b41e0c7f69d383de61c39308c48917374d4636a13a41" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.799404 4805 scope.go:117] "RemoveContainer" containerID="191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2" Feb 17 00:37:31 crc kubenswrapper[4805]: E0217 00:37:31.799876 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2\": container with ID starting with 191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2 not found: ID does not exist" containerID="191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.799915 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2"} err="failed to get container status \"191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2\": rpc error: code = NotFound desc = could not find container \"191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2\": container with ID starting with 191db1f0bd87ff7d7bf3345af3240a799d8590e50972f75ab009806b196a20f2 not found: ID does not exist" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.799938 4805 scope.go:117] "RemoveContainer" containerID="c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256" Feb 17 00:37:31 crc kubenswrapper[4805]: E0217 00:37:31.800260 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256\": container with ID starting with c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256 not found: ID does not exist" containerID="c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.800280 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256"} err="failed to get container status \"c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256\": rpc error: code = NotFound desc = could not find container \"c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256\": container with ID starting with c3fa308552627b75304cb7ce30db8669b166173cdaf7340aa3250b790b7a7256 not found: ID does not exist" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.800295 4805 scope.go:117] "RemoveContainer" containerID="75bf59d7bdebd8789fb0b41e0c7f69d383de61c39308c48917374d4636a13a41" Feb 17 00:37:31 crc kubenswrapper[4805]: E0217 00:37:31.800701 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75bf59d7bdebd8789fb0b41e0c7f69d383de61c39308c48917374d4636a13a41\": container with ID starting with 75bf59d7bdebd8789fb0b41e0c7f69d383de61c39308c48917374d4636a13a41 not found: ID does not exist" containerID="75bf59d7bdebd8789fb0b41e0c7f69d383de61c39308c48917374d4636a13a41" Feb 17 00:37:31 crc kubenswrapper[4805]: I0217 00:37:31.800723 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75bf59d7bdebd8789fb0b41e0c7f69d383de61c39308c48917374d4636a13a41"} err="failed to get container status \"75bf59d7bdebd8789fb0b41e0c7f69d383de61c39308c48917374d4636a13a41\": rpc error: code = NotFound desc = could not find container \"75bf59d7bdebd8789fb0b41e0c7f69d383de61c39308c48917374d4636a13a41\": container with ID starting with 75bf59d7bdebd8789fb0b41e0c7f69d383de61c39308c48917374d4636a13a41 not found: ID does not exist" Feb 17 00:37:32 crc kubenswrapper[4805]: I0217 00:37:32.334818 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" event={"ID":"3a4aeea4-aa38-45c9-9aaa-13670a1602fe","Type":"ContainerStarted","Data":"8287c0b69cdb86f6e347d6064c38b948ec091d42c98b12a81d795b2a7093197b"} Feb 17 00:37:32 crc kubenswrapper[4805]: I0217 00:37:32.358883 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-ww84p" 
podStartSLOduration=2.547595164 podStartE2EDuration="5.358861544s" podCreationTimestamp="2026-02-17 00:37:27 +0000 UTC" firstStartedPulling="2026-02-17 00:37:28.980340036 +0000 UTC m=+874.996149434" lastFinishedPulling="2026-02-17 00:37:31.791606416 +0000 UTC m=+877.807415814" observedRunningTime="2026-02-17 00:37:32.354588287 +0000 UTC m=+878.370397725" watchObservedRunningTime="2026-02-17 00:37:32.358861544 +0000 UTC m=+878.374670972" Feb 17 00:37:32 crc kubenswrapper[4805]: I0217 00:37:32.794358 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" path="/var/lib/kubelet/pods/35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa/volumes" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.752146 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-97q29"] Feb 17 00:37:33 crc kubenswrapper[4805]: E0217 00:37:33.753739 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerName="registry-server" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.753763 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerName="registry-server" Feb 17 00:37:33 crc kubenswrapper[4805]: E0217 00:37:33.753790 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerName="extract-utilities" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.753803 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerName="extract-utilities" Feb 17 00:37:33 crc kubenswrapper[4805]: E0217 00:37:33.753818 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerName="extract-content" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.753830 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerName="extract-content" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.754082 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="35e2f514-2f4e-48ec-9d7d-8e0fefccfdfa" containerName="registry-server" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.755641 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.760463 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97q29"] Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.872212 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-catalog-content\") pod \"redhat-marketplace-97q29\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.872280 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgzj\" (UniqueName: \"kubernetes.io/projected/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-kube-api-access-fxgzj\") pod \"redhat-marketplace-97q29\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.872313 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-utilities\") pod \"redhat-marketplace-97q29\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.973759 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgzj\" (UniqueName: \"kubernetes.io/projected/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-kube-api-access-fxgzj\") pod \"redhat-marketplace-97q29\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.973840 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-utilities\") pod \"redhat-marketplace-97q29\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.973942 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-catalog-content\") pod \"redhat-marketplace-97q29\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.974381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-utilities\") pod \"redhat-marketplace-97q29\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:33 crc kubenswrapper[4805]: I0217 00:37:33.974437 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-catalog-content\") pod \"redhat-marketplace-97q29\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:34 crc kubenswrapper[4805]: I0217 00:37:34.009262 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fxgzj\" (UniqueName: \"kubernetes.io/projected/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-kube-api-access-fxgzj\") pod \"redhat-marketplace-97q29\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:34 crc kubenswrapper[4805]: I0217 00:37:34.075226 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:34 crc kubenswrapper[4805]: I0217 00:37:34.297361 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-97q29"] Feb 17 00:37:34 crc kubenswrapper[4805]: I0217 00:37:34.356168 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7lswf" event={"ID":"3864820c-89a0-409c-84a6-7b4145026b77","Type":"ContainerStarted","Data":"5104372c89583460e01ae6ccb4a4f7eb7502f47a41d42843e7640831a84933c9"} Feb 17 00:37:34 crc kubenswrapper[4805]: I0217 00:37:34.357209 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97q29" event={"ID":"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c","Type":"ContainerStarted","Data":"a65ce8ae59d8d353946acb1c97332e0d004cf898db3134653827768791b38165"} Feb 17 00:37:34 crc kubenswrapper[4805]: I0217 00:37:34.375609 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7lswf" podStartSLOduration=2.124316493 podStartE2EDuration="7.375584591s" podCreationTimestamp="2026-02-17 00:37:27 +0000 UTC" firstStartedPulling="2026-02-17 00:37:28.380832956 +0000 UTC m=+874.396642354" lastFinishedPulling="2026-02-17 00:37:33.632101054 +0000 UTC m=+879.647910452" observedRunningTime="2026-02-17 00:37:34.374196783 +0000 UTC m=+880.390006181" watchObservedRunningTime="2026-02-17 00:37:34.375584591 +0000 UTC m=+880.391393989" Feb 17 00:37:35 crc kubenswrapper[4805]: I0217 00:37:35.367860 4805 generic.go:334] "Generic (PLEG): container finished" podID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" containerID="06b6b329777692b4e379f5335132a672a87e52b6e39365198428bc46d817328e" exitCode=0 Feb 17 00:37:35 crc kubenswrapper[4805]: I0217 00:37:35.367968 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97q29" event={"ID":"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c","Type":"ContainerDied","Data":"06b6b329777692b4e379f5335132a672a87e52b6e39365198428bc46d817328e"} Feb 17 00:37:36 crc kubenswrapper[4805]: I0217 00:37:36.375220 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97q29" event={"ID":"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c","Type":"ContainerStarted","Data":"05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc"} Feb 17 00:37:37 crc kubenswrapper[4805]: I0217 00:37:37.384113 4805 generic.go:334] "Generic (PLEG): container finished" podID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" containerID="05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc" exitCode=0 Feb 17 00:37:37 crc kubenswrapper[4805]: I0217 00:37:37.384201 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97q29" event={"ID":"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c","Type":"ContainerDied","Data":"05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc"} Feb 17 00:37:37 crc kubenswrapper[4805]: I0217 00:37:37.896454 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-j2dnr" Feb 17 00:37:38 crc kubenswrapper[4805]: I0217 00:37:38.308220 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:38 crc kubenswrapper[4805]: I0217 00:37:38.308530 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:38 crc kubenswrapper[4805]: I0217 00:37:38.313800 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:38 crc kubenswrapper[4805]: I0217 00:37:38.391553 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97q29" event={"ID":"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c","Type":"ContainerStarted","Data":"34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074"} Feb 17 00:37:38 crc kubenswrapper[4805]: I0217 00:37:38.398378 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:37:38 crc kubenswrapper[4805]: I0217 00:37:38.413818 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-97q29" podStartSLOduration=2.928702893 podStartE2EDuration="5.413796445s" podCreationTimestamp="2026-02-17 00:37:33 +0000 UTC" firstStartedPulling="2026-02-17 00:37:35.373199316 +0000 UTC m=+881.389008724" lastFinishedPulling="2026-02-17 00:37:37.858292858 +0000 UTC m=+883.874102276" observedRunningTime="2026-02-17 00:37:38.408633434 +0000 UTC m=+884.424442842" watchObservedRunningTime="2026-02-17 00:37:38.413796445 +0000 UTC m=+884.429605833" Feb 17 00:37:38 crc kubenswrapper[4805]: I0217 00:37:38.490502 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t9l4h"] Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.551232 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tjphg"] Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.553224 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.576780 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tjphg"] Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.623896 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-catalog-content\") pod \"community-operators-tjphg\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.624050 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr5jv\" (UniqueName: \"kubernetes.io/projected/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-kube-api-access-gr5jv\") pod \"community-operators-tjphg\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.624131 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-utilities\") pod \"community-operators-tjphg\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.725252 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-catalog-content\") pod \"community-operators-tjphg\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.725788 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr5jv\" (UniqueName: \"kubernetes.io/projected/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-kube-api-access-gr5jv\") pod \"community-operators-tjphg\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.725844 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-utilities\") pod \"community-operators-tjphg\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.726436 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-utilities\") pod \"community-operators-tjphg\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.726689 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-catalog-content\") pod \"community-operators-tjphg\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.752661 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gr5jv\" (UniqueName: \"kubernetes.io/projected/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-kube-api-access-gr5jv\") pod \"community-operators-tjphg\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:43 crc kubenswrapper[4805]: I0217 00:37:43.877674 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:44 crc kubenswrapper[4805]: I0217 00:37:44.077003 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:44 crc kubenswrapper[4805]: I0217 00:37:44.077589 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:44 crc kubenswrapper[4805]: I0217 00:37:44.158050 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:44 crc kubenswrapper[4805]: I0217 00:37:44.386210 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tjphg"] Feb 17 00:37:44 crc kubenswrapper[4805]: W0217 00:37:44.391550 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fe4833b_f9fb_4be1_811a_5e2ffd1ee251.slice/crio-893d7c323f554dbc4e16ef613fcdaca190b4a11255d80f353cf71d8e974dcf91 WatchSource:0}: Error finding container 893d7c323f554dbc4e16ef613fcdaca190b4a11255d80f353cf71d8e974dcf91: Status 404 returned error can't find the container with id 893d7c323f554dbc4e16ef613fcdaca190b4a11255d80f353cf71d8e974dcf91 Feb 17 00:37:44 crc kubenswrapper[4805]: I0217 00:37:44.442674 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjphg" event={"ID":"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251","Type":"ContainerStarted","Data":"893d7c323f554dbc4e16ef613fcdaca190b4a11255d80f353cf71d8e974dcf91"} Feb 17 00:37:44 crc kubenswrapper[4805]: I0217 00:37:44.487060 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:45 crc kubenswrapper[4805]: I0217 00:37:45.454556 4805 generic.go:334] "Generic (PLEG): container finished" podID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" containerID="e91f3acdc5f4f5d4b5a4e28e435bc3d78f5b935fd4b2168d8a668ecf39db8012" exitCode=0 Feb 17 00:37:45 crc kubenswrapper[4805]: I0217 00:37:45.454646 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjphg" event={"ID":"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251","Type":"ContainerDied","Data":"e91f3acdc5f4f5d4b5a4e28e435bc3d78f5b935fd4b2168d8a668ecf39db8012"} Feb 17 00:37:45 crc kubenswrapper[4805]: I0217 00:37:45.457947 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 00:37:46 crc kubenswrapper[4805]: I0217 00:37:46.464402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjphg" event={"ID":"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251","Type":"ContainerStarted","Data":"068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39"} Feb 17 00:37:46 crc kubenswrapper[4805]: I0217 00:37:46.517205 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-97q29"] Feb 17 00:37:46 crc 
kubenswrapper[4805]: I0217 00:37:46.517500 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-97q29" podUID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" containerName="registry-server" containerID="cri-o://34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074" gracePeriod=2 Feb 17 00:37:46 crc kubenswrapper[4805]: I0217 00:37:46.938991 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.078645 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-utilities\") pod \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.078760 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxgzj\" (UniqueName: \"kubernetes.io/projected/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-kube-api-access-fxgzj\") pod \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.078816 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-catalog-content\") pod \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\" (UID: \"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c\") " Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.079638 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-utilities" (OuterVolumeSpecName: "utilities") pod "c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" (UID: "c39f1eab-ecb5-4045-b27e-3a2f2d066b8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.084659 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-kube-api-access-fxgzj" (OuterVolumeSpecName: "kube-api-access-fxgzj") pod "c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" (UID: "c39f1eab-ecb5-4045-b27e-3a2f2d066b8c"). InnerVolumeSpecName "kube-api-access-fxgzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.117611 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" (UID: "c39f1eab-ecb5-4045-b27e-3a2f2d066b8c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.180592 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.180626 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxgzj\" (UniqueName: \"kubernetes.io/projected/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-kube-api-access-fxgzj\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.180641 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.471236 4805 generic.go:334] "Generic (PLEG): container finished" podID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" containerID="068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39" exitCode=0 Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.471319 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjphg" event={"ID":"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251","Type":"ContainerDied","Data":"068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39"} Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.474097 4805 generic.go:334] "Generic (PLEG): container finished" podID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" containerID="34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074" exitCode=0 Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.474134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97q29" event={"ID":"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c","Type":"ContainerDied","Data":"34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074"} Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.474156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-97q29" event={"ID":"c39f1eab-ecb5-4045-b27e-3a2f2d066b8c","Type":"ContainerDied","Data":"a65ce8ae59d8d353946acb1c97332e0d004cf898db3134653827768791b38165"} Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.474176 4805 scope.go:117] "RemoveContainer" containerID="34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.474292 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-97q29" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.495199 4805 scope.go:117] "RemoveContainer" containerID="05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.517279 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-97q29"] Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.521830 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-97q29"] Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.549631 4805 scope.go:117] "RemoveContainer" containerID="06b6b329777692b4e379f5335132a672a87e52b6e39365198428bc46d817328e" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.568946 4805 scope.go:117] "RemoveContainer" containerID="34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074" Feb 17 00:37:47 crc kubenswrapper[4805]: E0217 00:37:47.574744 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074\": container with ID starting with 34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074 not found: ID does not exist" containerID="34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.574792 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074"} err="failed to get container status \"34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074\": rpc error: code = NotFound desc = could not find container \"34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074\": container with ID starting with 34c7f4eeeea4336b907b76a6d3e2e07466e2b435c399e7bf9d9ea3629dc33074 not found: ID does not exist" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.574824 4805 scope.go:117] "RemoveContainer" containerID="05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc" Feb 17 00:37:47 crc kubenswrapper[4805]: E0217 00:37:47.575089 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc\": container with ID starting with 05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc not found: ID does not exist" containerID="05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.575120 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc"} err="failed to get container status \"05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc\": rpc error: code = NotFound desc = could not find container \"05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc\": container with ID starting with 05627ddb5379544a3f6fd0dbf9aaaf4bf7e21e965776d1ef692ffd8bf2894fdc not found: ID does not exist" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.575142 4805 scope.go:117] "RemoveContainer" containerID="06b6b329777692b4e379f5335132a672a87e52b6e39365198428bc46d817328e" Feb 17 00:37:47 crc kubenswrapper[4805]: E0217 00:37:47.575531 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"06b6b329777692b4e379f5335132a672a87e52b6e39365198428bc46d817328e\": container with ID starting with 06b6b329777692b4e379f5335132a672a87e52b6e39365198428bc46d817328e not found: ID does not exist" containerID="06b6b329777692b4e379f5335132a672a87e52b6e39365198428bc46d817328e" Feb 17 00:37:47 crc kubenswrapper[4805]: I0217 00:37:47.575589 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06b6b329777692b4e379f5335132a672a87e52b6e39365198428bc46d817328e"} err="failed to get container status \"06b6b329777692b4e379f5335132a672a87e52b6e39365198428bc46d817328e\": rpc error: code = NotFound desc = could not find container \"06b6b329777692b4e379f5335132a672a87e52b6e39365198428bc46d817328e\": container with ID starting with 06b6b329777692b4e379f5335132a672a87e52b6e39365198428bc46d817328e not found: ID does not exist" Feb 17 00:37:48 crc kubenswrapper[4805]: I0217 00:37:48.454492 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-m5d7x" Feb 17 00:37:48 crc kubenswrapper[4805]: I0217 00:37:48.801386 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" path="/var/lib/kubelet/pods/c39f1eab-ecb5-4045-b27e-3a2f2d066b8c/volumes" Feb 17 00:37:49 crc kubenswrapper[4805]: I0217 00:37:49.494758 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjphg" event={"ID":"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251","Type":"ContainerStarted","Data":"ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93"} Feb 17 00:37:49 crc kubenswrapper[4805]: I0217 00:37:49.517637 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tjphg" podStartSLOduration=3.296785246 podStartE2EDuration="6.517611692s" podCreationTimestamp="2026-02-17 00:37:43 +0000 UTC" firstStartedPulling="2026-02-17 00:37:45.457570761 +0000 UTC m=+891.473380189" lastFinishedPulling="2026-02-17 00:37:48.678397197 +0000 UTC m=+894.694206635" observedRunningTime="2026-02-17 00:37:49.514008033 +0000 UTC m=+895.529817441" watchObservedRunningTime="2026-02-17 00:37:49.517611692 +0000 UTC m=+895.533421100" Feb 17 00:37:53 crc kubenswrapper[4805]: I0217 00:37:53.077170 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:37:53 crc kubenswrapper[4805]: I0217 00:37:53.077591 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:37:53 crc kubenswrapper[4805]: I0217 00:37:53.878409 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:53 crc kubenswrapper[4805]: I0217 00:37:53.878456 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:53 crc kubenswrapper[4805]: I0217 00:37:53.926226 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:54 crc kubenswrapper[4805]: I0217 00:37:54.599067 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:54 crc kubenswrapper[4805]: I0217 00:37:54.661942 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tjphg"] Feb 17 00:37:56 crc kubenswrapper[4805]: I0217 00:37:56.549486 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tjphg" podUID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" containerName="registry-server" containerID="cri-o://ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93" gracePeriod=2 Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.096133 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.262131 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr5jv\" (UniqueName: \"kubernetes.io/projected/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-kube-api-access-gr5jv\") pod \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.262188 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-catalog-content\") pod \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.264511 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-utilities\") pod \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\" (UID: \"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251\") " Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.265382 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-utilities" (OuterVolumeSpecName: "utilities") pod "2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" (UID: "2fe4833b-f9fb-4be1-811a-5e2ffd1ee251"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.267910 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-kube-api-access-gr5jv" (OuterVolumeSpecName: "kube-api-access-gr5jv") pod "2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" (UID: "2fe4833b-f9fb-4be1-811a-5e2ffd1ee251"). InnerVolumeSpecName "kube-api-access-gr5jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.348406 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" (UID: "2fe4833b-f9fb-4be1-811a-5e2ffd1ee251"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.366392 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr5jv\" (UniqueName: \"kubernetes.io/projected/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-kube-api-access-gr5jv\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.366423 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.366433 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.559930 4805 generic.go:334] "Generic (PLEG): container finished" podID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" containerID="ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93" exitCode=0 Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.559983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjphg" event={"ID":"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251","Type":"ContainerDied","Data":"ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93"} Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.560024 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tjphg" event={"ID":"2fe4833b-f9fb-4be1-811a-5e2ffd1ee251","Type":"ContainerDied","Data":"893d7c323f554dbc4e16ef613fcdaca190b4a11255d80f353cf71d8e974dcf91"} Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.560042 4805 scope.go:117] "RemoveContainer" containerID="ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.560060 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tjphg" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.576734 4805 scope.go:117] "RemoveContainer" containerID="068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.602794 4805 scope.go:117] "RemoveContainer" containerID="e91f3acdc5f4f5d4b5a4e28e435bc3d78f5b935fd4b2168d8a668ecf39db8012" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.607236 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tjphg"] Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.615029 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tjphg"] Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.628512 4805 scope.go:117] "RemoveContainer" containerID="ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93" Feb 17 00:37:57 crc kubenswrapper[4805]: E0217 00:37:57.629124 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93\": container with ID starting with ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93 not found: ID does not exist" containerID="ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.629204 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93"} err="failed to get container status \"ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93\": rpc error: code = NotFound desc = could not find container \"ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93\": container with ID starting with ae6a6bb5e4049f05a5591a52fe4e1b0fe6cfed5144a7b2c8b0eff96fa7ffac93 not found: ID does not exist" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.629237 4805 scope.go:117] "RemoveContainer" containerID="068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39" Feb 17 00:37:57 crc kubenswrapper[4805]: E0217 00:37:57.629693 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39\": container with ID starting with 068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39 not found: ID does not exist" containerID="068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.629726 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39"} err="failed to get container status \"068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39\": rpc error: code = NotFound desc = could not find container \"068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39\": container with ID starting with 068415894eab95682537fe69a573aa9f4a9e00722513435684cdb29577cbff39 not found: ID does not exist" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.629745 4805 scope.go:117] "RemoveContainer" containerID="e91f3acdc5f4f5d4b5a4e28e435bc3d78f5b935fd4b2168d8a668ecf39db8012" Feb 17 00:37:57 crc kubenswrapper[4805]: E0217 00:37:57.630177 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e91f3acdc5f4f5d4b5a4e28e435bc3d78f5b935fd4b2168d8a668ecf39db8012\": container with ID starting with e91f3acdc5f4f5d4b5a4e28e435bc3d78f5b935fd4b2168d8a668ecf39db8012 not found: ID does not exist" containerID="e91f3acdc5f4f5d4b5a4e28e435bc3d78f5b935fd4b2168d8a668ecf39db8012" Feb 17 00:37:57 crc kubenswrapper[4805]: I0217 00:37:57.630209 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e91f3acdc5f4f5d4b5a4e28e435bc3d78f5b935fd4b2168d8a668ecf39db8012"} err="failed to get container status \"e91f3acdc5f4f5d4b5a4e28e435bc3d78f5b935fd4b2168d8a668ecf39db8012\": rpc error: code = NotFound desc = could not find container \"e91f3acdc5f4f5d4b5a4e28e435bc3d78f5b935fd4b2168d8a668ecf39db8012\": container with ID starting with e91f3acdc5f4f5d4b5a4e28e435bc3d78f5b935fd4b2168d8a668ecf39db8012 not found: ID does not exist" Feb 17 00:37:58 crc kubenswrapper[4805]: I0217 00:37:58.800316 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" path="/var/lib/kubelet/pods/2fe4833b-f9fb-4be1-811a-5e2ffd1ee251/volumes" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.573379 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wvbl5"] Feb 17 00:38:00 crc kubenswrapper[4805]: E0217 00:38:00.574621 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" containerName="extract-utilities" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.574641 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" containerName="extract-utilities" Feb 17 00:38:00 crc kubenswrapper[4805]: E0217 00:38:00.574683 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" containerName="registry-server" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.574694 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" containerName="registry-server" Feb 17 00:38:00 crc kubenswrapper[4805]: E0217 00:38:00.574710 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" containerName="extract-utilities" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.574719 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" containerName="extract-utilities" Feb 17 00:38:00 crc kubenswrapper[4805]: E0217 00:38:00.574757 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" containerName="extract-content" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.574768 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" containerName="extract-content" Feb 17 00:38:00 crc kubenswrapper[4805]: E0217 00:38:00.574783 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" containerName="registry-server" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.574793 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" containerName="registry-server" Feb 17 00:38:00 crc kubenswrapper[4805]: E0217 00:38:00.574806 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" 
containerName="extract-content" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.574814 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" containerName="extract-content" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.575066 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe4833b-f9fb-4be1-811a-5e2ffd1ee251" containerName="registry-server" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.575107 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39f1eab-ecb5-4045-b27e-3a2f2d066b8c" containerName="registry-server" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.576699 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wvbl5"] Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.576808 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.717675 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-catalog-content\") pod \"certified-operators-wvbl5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.717738 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td5h6\" (UniqueName: \"kubernetes.io/projected/06297f22-f922-4055-a8b8-084dc9e2fad5-kube-api-access-td5h6\") pod \"certified-operators-wvbl5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.717851 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-utilities\") pod \"certified-operators-wvbl5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.818892 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-catalog-content\") pod \"certified-operators-wvbl5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.819244 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td5h6\" (UniqueName: \"kubernetes.io/projected/06297f22-f922-4055-a8b8-084dc9e2fad5-kube-api-access-td5h6\") pod \"certified-operators-wvbl5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.819299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-utilities\") pod \"certified-operators-wvbl5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.819541 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-catalog-content\") pod \"certified-operators-wvbl5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.819896 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-utilities\") pod \"certified-operators-wvbl5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.839346 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td5h6\" (UniqueName: \"kubernetes.io/projected/06297f22-f922-4055-a8b8-084dc9e2fad5-kube-api-access-td5h6\") pod \"certified-operators-wvbl5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:00 crc kubenswrapper[4805]: I0217 00:38:00.944508 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:01 crc kubenswrapper[4805]: I0217 00:38:01.420639 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wvbl5"] Feb 17 00:38:01 crc kubenswrapper[4805]: I0217 00:38:01.633056 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvbl5" event={"ID":"06297f22-f922-4055-a8b8-084dc9e2fad5","Type":"ContainerStarted","Data":"75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a"} Feb 17 00:38:01 crc kubenswrapper[4805]: I0217 00:38:01.633304 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvbl5" event={"ID":"06297f22-f922-4055-a8b8-084dc9e2fad5","Type":"ContainerStarted","Data":"b78b8748b654f16499742270765e63a6bfa434e3b4f49c6e694c525930806677"} Feb 17 00:38:02 crc kubenswrapper[4805]: I0217 00:38:02.641638 4805 generic.go:334] "Generic (PLEG): container finished" podID="06297f22-f922-4055-a8b8-084dc9e2fad5" containerID="75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a" exitCode=0 Feb 17 00:38:02 crc kubenswrapper[4805]: I0217 00:38:02.641678 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvbl5" event={"ID":"06297f22-f922-4055-a8b8-084dc9e2fad5","Type":"ContainerDied","Data":"75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a"} Feb 17 00:38:02 crc kubenswrapper[4805]: I0217 00:38:02.642080 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvbl5" event={"ID":"06297f22-f922-4055-a8b8-084dc9e2fad5","Type":"ContainerStarted","Data":"e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb"} Feb 17 00:38:03 crc kubenswrapper[4805]: I0217 00:38:03.548396 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-t9l4h" podUID="24781b06-2cc6-49d0-a506-b992048e1c84" containerName="console" containerID="cri-o://ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1" gracePeriod=15 Feb 17 00:38:03 crc kubenswrapper[4805]: I0217 00:38:03.650926 4805 generic.go:334] "Generic (PLEG): container finished" podID="06297f22-f922-4055-a8b8-084dc9e2fad5" 
containerID="e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb" exitCode=0 Feb 17 00:38:03 crc kubenswrapper[4805]: I0217 00:38:03.650979 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvbl5" event={"ID":"06297f22-f922-4055-a8b8-084dc9e2fad5","Type":"ContainerDied","Data":"e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb"} Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.023007 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-t9l4h_24781b06-2cc6-49d0-a506-b992048e1c84/console/0.log" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.023260 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.171246 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-serving-cert\") pod \"24781b06-2cc6-49d0-a506-b992048e1c84\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.171369 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-oauth-serving-cert\") pod \"24781b06-2cc6-49d0-a506-b992048e1c84\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.171412 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-service-ca\") pod \"24781b06-2cc6-49d0-a506-b992048e1c84\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.171432 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-oauth-config\") pod \"24781b06-2cc6-49d0-a506-b992048e1c84\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.171477 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-console-config\") pod \"24781b06-2cc6-49d0-a506-b992048e1c84\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.171528 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-trusted-ca-bundle\") pod \"24781b06-2cc6-49d0-a506-b992048e1c84\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.171552 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bzbm\" (UniqueName: \"kubernetes.io/projected/24781b06-2cc6-49d0-a506-b992048e1c84-kube-api-access-8bzbm\") pod \"24781b06-2cc6-49d0-a506-b992048e1c84\" (UID: \"24781b06-2cc6-49d0-a506-b992048e1c84\") " Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.171967 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-service-ca" 
(OuterVolumeSpecName: "service-ca") pod "24781b06-2cc6-49d0-a506-b992048e1c84" (UID: "24781b06-2cc6-49d0-a506-b992048e1c84"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.172230 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-console-config" (OuterVolumeSpecName: "console-config") pod "24781b06-2cc6-49d0-a506-b992048e1c84" (UID: "24781b06-2cc6-49d0-a506-b992048e1c84"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.172489 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "24781b06-2cc6-49d0-a506-b992048e1c84" (UID: "24781b06-2cc6-49d0-a506-b992048e1c84"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.172992 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "24781b06-2cc6-49d0-a506-b992048e1c84" (UID: "24781b06-2cc6-49d0-a506-b992048e1c84"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.177337 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24781b06-2cc6-49d0-a506-b992048e1c84-kube-api-access-8bzbm" (OuterVolumeSpecName: "kube-api-access-8bzbm") pod "24781b06-2cc6-49d0-a506-b992048e1c84" (UID: "24781b06-2cc6-49d0-a506-b992048e1c84"). InnerVolumeSpecName "kube-api-access-8bzbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.178248 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "24781b06-2cc6-49d0-a506-b992048e1c84" (UID: "24781b06-2cc6-49d0-a506-b992048e1c84"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.195932 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "24781b06-2cc6-49d0-a506-b992048e1c84" (UID: "24781b06-2cc6-49d0-a506-b992048e1c84"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.273592 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.273628 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.273640 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.273651 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24781b06-2cc6-49d0-a506-b992048e1c84-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.273661 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.273671 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24781b06-2cc6-49d0-a506-b992048e1c84-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.273682 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bzbm\" (UniqueName: \"kubernetes.io/projected/24781b06-2cc6-49d0-a506-b992048e1c84-kube-api-access-8bzbm\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.659111 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-t9l4h_24781b06-2cc6-49d0-a506-b992048e1c84/console/0.log" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.659417 4805 generic.go:334] "Generic (PLEG): container finished" podID="24781b06-2cc6-49d0-a506-b992048e1c84" containerID="ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1" exitCode=2 Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.659498 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-t9l4h" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.659501 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t9l4h" event={"ID":"24781b06-2cc6-49d0-a506-b992048e1c84","Type":"ContainerDied","Data":"ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1"} Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.659605 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t9l4h" event={"ID":"24781b06-2cc6-49d0-a506-b992048e1c84","Type":"ContainerDied","Data":"4ea166231cba90eb0b12bb5c116e413a7580c288ba54f5804ffa58e4bb59dcab"} Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.659630 4805 scope.go:117] "RemoveContainer" containerID="ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.662700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvbl5" event={"ID":"06297f22-f922-4055-a8b8-084dc9e2fad5","Type":"ContainerStarted","Data":"07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae"} Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.683158 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wvbl5" podStartSLOduration=2.262900965 podStartE2EDuration="4.683136345s" podCreationTimestamp="2026-02-17 00:38:00 +0000 UTC" firstStartedPulling="2026-02-17 00:38:01.635198585 +0000 UTC m=+907.651007993" lastFinishedPulling="2026-02-17 00:38:04.055433975 +0000 UTC m=+910.071243373" observedRunningTime="2026-02-17 00:38:04.677573863 +0000 UTC m=+910.693383281" watchObservedRunningTime="2026-02-17 00:38:04.683136345 +0000 UTC m=+910.698945743" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.700155 4805 scope.go:117] "RemoveContainer" containerID="ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1" Feb 17 00:38:04 crc kubenswrapper[4805]: E0217 00:38:04.700769 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1\": container with ID starting with ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1 not found: ID does not exist" containerID="ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.700813 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1"} err="failed to get container status \"ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1\": rpc error: code = NotFound desc = could not find container \"ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1\": container with ID starting with ed8dadbfdb3468f89085f281901c146db10a93a4c6bf725602b097f0208849d1 not found: ID does not exist" Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.710615 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t9l4h"] Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.715349 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-t9l4h"] Feb 17 00:38:04 crc kubenswrapper[4805]: I0217 00:38:04.792259 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="24781b06-2cc6-49d0-a506-b992048e1c84" path="/var/lib/kubelet/pods/24781b06-2cc6-49d0-a506-b992048e1c84/volumes" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.623747 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn"] Feb 17 00:38:06 crc kubenswrapper[4805]: E0217 00:38:06.624004 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24781b06-2cc6-49d0-a506-b992048e1c84" containerName="console" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.624016 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="24781b06-2cc6-49d0-a506-b992048e1c84" containerName="console" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.624124 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="24781b06-2cc6-49d0-a506-b992048e1c84" containerName="console" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.625131 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.628862 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.640722 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn"] Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.824368 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.824449 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqkjw\" (UniqueName: \"kubernetes.io/projected/3cc3d85a-bf6d-4592-a085-dd47efd5331f-kube-api-access-jqkjw\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.824480 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.925763 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqkjw\" (UniqueName: \"kubernetes.io/projected/3cc3d85a-bf6d-4592-a085-dd47efd5331f-kube-api-access-jqkjw\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.925835 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.925989 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.926414 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.926429 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.955153 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqkjw\" (UniqueName: \"kubernetes.io/projected/3cc3d85a-bf6d-4592-a085-dd47efd5331f-kube-api-access-jqkjw\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:06 crc kubenswrapper[4805]: I0217 00:38:06.992288 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:07 crc kubenswrapper[4805]: I0217 00:38:07.250275 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn"] Feb 17 00:38:07 crc kubenswrapper[4805]: I0217 00:38:07.696180 4805 generic.go:334] "Generic (PLEG): container finished" podID="3cc3d85a-bf6d-4592-a085-dd47efd5331f" containerID="9e7ca940e72f9c91d09112496a91c7c93ddefb5a767dcd98c95aeb9308d456c1" exitCode=0 Feb 17 00:38:07 crc kubenswrapper[4805]: I0217 00:38:07.696269 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" event={"ID":"3cc3d85a-bf6d-4592-a085-dd47efd5331f","Type":"ContainerDied","Data":"9e7ca940e72f9c91d09112496a91c7c93ddefb5a767dcd98c95aeb9308d456c1"} Feb 17 00:38:07 crc kubenswrapper[4805]: I0217 00:38:07.696609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" event={"ID":"3cc3d85a-bf6d-4592-a085-dd47efd5331f","Type":"ContainerStarted","Data":"bedf5b72db568d3cd1b8f300cfb3ae27aa229973c5aa0349c07bbda55bdebf1d"} Feb 17 00:38:09 crc kubenswrapper[4805]: I0217 00:38:09.719899 4805 generic.go:334] "Generic (PLEG): container finished" podID="3cc3d85a-bf6d-4592-a085-dd47efd5331f" containerID="32e79fd17aab25156e677ad818804440dde07c476e97d5d31e9d6b40482f4d91" exitCode=0 Feb 17 00:38:09 crc kubenswrapper[4805]: I0217 00:38:09.719961 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" event={"ID":"3cc3d85a-bf6d-4592-a085-dd47efd5331f","Type":"ContainerDied","Data":"32e79fd17aab25156e677ad818804440dde07c476e97d5d31e9d6b40482f4d91"} Feb 17 00:38:10 crc kubenswrapper[4805]: I0217 00:38:10.732980 4805 generic.go:334] "Generic (PLEG): container finished" podID="3cc3d85a-bf6d-4592-a085-dd47efd5331f" containerID="ee5820c056bea5fe8a0ab272514c69e6932271b565364c2442a88c78182d4f1d" exitCode=0 Feb 17 00:38:10 crc kubenswrapper[4805]: I0217 00:38:10.733046 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" event={"ID":"3cc3d85a-bf6d-4592-a085-dd47efd5331f","Type":"ContainerDied","Data":"ee5820c056bea5fe8a0ab272514c69e6932271b565364c2442a88c78182d4f1d"} Feb 17 00:38:10 crc kubenswrapper[4805]: I0217 00:38:10.945988 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:10 crc kubenswrapper[4805]: I0217 00:38:10.946374 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:11 crc kubenswrapper[4805]: I0217 00:38:11.033088 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:11 crc kubenswrapper[4805]: I0217 00:38:11.786853 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.065437 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.213202 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-bundle\") pod \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.213311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-util\") pod \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.214277 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-bundle" (OuterVolumeSpecName: "bundle") pod "3cc3d85a-bf6d-4592-a085-dd47efd5331f" (UID: "3cc3d85a-bf6d-4592-a085-dd47efd5331f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.214887 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqkjw\" (UniqueName: \"kubernetes.io/projected/3cc3d85a-bf6d-4592-a085-dd47efd5331f-kube-api-access-jqkjw\") pod \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\" (UID: \"3cc3d85a-bf6d-4592-a085-dd47efd5331f\") " Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.221563 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc3d85a-bf6d-4592-a085-dd47efd5331f-kube-api-access-jqkjw" (OuterVolumeSpecName: "kube-api-access-jqkjw") pod "3cc3d85a-bf6d-4592-a085-dd47efd5331f" (UID: "3cc3d85a-bf6d-4592-a085-dd47efd5331f"). InnerVolumeSpecName "kube-api-access-jqkjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.226515 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-util" (OuterVolumeSpecName: "util") pod "3cc3d85a-bf6d-4592-a085-dd47efd5331f" (UID: "3cc3d85a-bf6d-4592-a085-dd47efd5331f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.316276 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqkjw\" (UniqueName: \"kubernetes.io/projected/3cc3d85a-bf6d-4592-a085-dd47efd5331f-kube-api-access-jqkjw\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.316306 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.316317 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cc3d85a-bf6d-4592-a085-dd47efd5331f-util\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.749588 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" event={"ID":"3cc3d85a-bf6d-4592-a085-dd47efd5331f","Type":"ContainerDied","Data":"bedf5b72db568d3cd1b8f300cfb3ae27aa229973c5aa0349c07bbda55bdebf1d"} Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.749911 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bedf5b72db568d3cd1b8f300cfb3ae27aa229973c5aa0349c07bbda55bdebf1d" Feb 17 00:38:12 crc kubenswrapper[4805]: I0217 00:38:12.749598 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn" Feb 17 00:38:14 crc kubenswrapper[4805]: I0217 00:38:14.156486 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wvbl5"] Feb 17 00:38:14 crc kubenswrapper[4805]: I0217 00:38:14.765202 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wvbl5" podUID="06297f22-f922-4055-a8b8-084dc9e2fad5" containerName="registry-server" containerID="cri-o://07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae" gracePeriod=2 Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.229509 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.364522 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-catalog-content\") pod \"06297f22-f922-4055-a8b8-084dc9e2fad5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.364588 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td5h6\" (UniqueName: \"kubernetes.io/projected/06297f22-f922-4055-a8b8-084dc9e2fad5-kube-api-access-td5h6\") pod \"06297f22-f922-4055-a8b8-084dc9e2fad5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.364768 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-utilities\") pod \"06297f22-f922-4055-a8b8-084dc9e2fad5\" (UID: \"06297f22-f922-4055-a8b8-084dc9e2fad5\") " Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.365512 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-utilities" (OuterVolumeSpecName: "utilities") pod "06297f22-f922-4055-a8b8-084dc9e2fad5" (UID: "06297f22-f922-4055-a8b8-084dc9e2fad5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.380568 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06297f22-f922-4055-a8b8-084dc9e2fad5-kube-api-access-td5h6" (OuterVolumeSpecName: "kube-api-access-td5h6") pod "06297f22-f922-4055-a8b8-084dc9e2fad5" (UID: "06297f22-f922-4055-a8b8-084dc9e2fad5"). InnerVolumeSpecName "kube-api-access-td5h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.416054 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06297f22-f922-4055-a8b8-084dc9e2fad5" (UID: "06297f22-f922-4055-a8b8-084dc9e2fad5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.466593 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.466632 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td5h6\" (UniqueName: \"kubernetes.io/projected/06297f22-f922-4055-a8b8-084dc9e2fad5-kube-api-access-td5h6\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.466646 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06297f22-f922-4055-a8b8-084dc9e2fad5-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.778419 4805 generic.go:334] "Generic (PLEG): container finished" podID="06297f22-f922-4055-a8b8-084dc9e2fad5" containerID="07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae" exitCode=0 Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.778488 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvbl5" event={"ID":"06297f22-f922-4055-a8b8-084dc9e2fad5","Type":"ContainerDied","Data":"07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae"} Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.778526 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvbl5" event={"ID":"06297f22-f922-4055-a8b8-084dc9e2fad5","Type":"ContainerDied","Data":"b78b8748b654f16499742270765e63a6bfa434e3b4f49c6e694c525930806677"} Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.778555 4805 scope.go:117] "RemoveContainer" containerID="07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.778719 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wvbl5" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.815724 4805 scope.go:117] "RemoveContainer" containerID="e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.823744 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wvbl5"] Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.842727 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wvbl5"] Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.861412 4805 scope.go:117] "RemoveContainer" containerID="75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.890954 4805 scope.go:117] "RemoveContainer" containerID="07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae" Feb 17 00:38:15 crc kubenswrapper[4805]: E0217 00:38:15.891738 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae\": container with ID starting with 07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae not found: ID does not exist" containerID="07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.891767 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae"} err="failed to get container status \"07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae\": rpc error: code = NotFound desc = could not find container \"07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae\": container with ID starting with 07ce8446b44a1b65dd8526445d187985f412b29da5a3805d7095bf0b187e70ae not found: ID does not exist" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.891788 4805 scope.go:117] "RemoveContainer" containerID="e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb" Feb 17 00:38:15 crc kubenswrapper[4805]: E0217 00:38:15.892220 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb\": container with ID starting with e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb not found: ID does not exist" containerID="e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.892268 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb"} err="failed to get container status \"e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb\": rpc error: code = NotFound desc = could not find container \"e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb\": container with ID starting with e158be960be44e1e5c9eba5081659abc63d41d1a25a483f701d701a29d31debb not found: ID does not exist" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.892286 4805 scope.go:117] "RemoveContainer" containerID="75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a" Feb 17 00:38:15 crc kubenswrapper[4805]: E0217 00:38:15.892815 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a\": container with ID starting with 75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a not found: ID does not exist" containerID="75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a" Feb 17 00:38:15 crc kubenswrapper[4805]: I0217 00:38:15.892837 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a"} err="failed to get container status \"75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a\": rpc error: code = NotFound desc = could not find container \"75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a\": container with ID starting with 75ae69f5c4ff5f2ebee8705ea47505d6f98f6a73852944ae130e1ea6cee6fd2a not found: ID does not exist" Feb 17 00:38:16 crc kubenswrapper[4805]: I0217 00:38:16.793211 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06297f22-f922-4055-a8b8-084dc9e2fad5" path="/var/lib/kubelet/pods/06297f22-f922-4055-a8b8-084dc9e2fad5/volumes" Feb 17 00:38:23 crc kubenswrapper[4805]: I0217 00:38:23.076770 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:38:23 crc kubenswrapper[4805]: I0217 00:38:23.078278 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:38:23 crc kubenswrapper[4805]: I0217 00:38:23.078441 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:38:23 crc kubenswrapper[4805]: I0217 00:38:23.079057 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d211867bc1681978ebc5d59d36a82514c65d45557bfedaef2dbb1dd0c87d945"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:38:23 crc kubenswrapper[4805]: I0217 00:38:23.079172 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://3d211867bc1681978ebc5d59d36a82514c65d45557bfedaef2dbb1dd0c87d945" gracePeriod=600 Feb 17 00:38:23 crc kubenswrapper[4805]: I0217 00:38:23.841260 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="3d211867bc1681978ebc5d59d36a82514c65d45557bfedaef2dbb1dd0c87d945" exitCode=0 Feb 17 00:38:23 crc kubenswrapper[4805]: I0217 00:38:23.841340 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" 
event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"3d211867bc1681978ebc5d59d36a82514c65d45557bfedaef2dbb1dd0c87d945"} Feb 17 00:38:23 crc kubenswrapper[4805]: I0217 00:38:23.841663 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"9b39148eed4bf6c031ce94a8f02e78b29f27257693ebbfc8744d515a52505620"} Feb 17 00:38:23 crc kubenswrapper[4805]: I0217 00:38:23.841688 4805 scope.go:117] "RemoveContainer" containerID="94681fae909df52b2f0ea3231365723006f05038e8db255093526e2aabbaa471" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.103450 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf"] Feb 17 00:38:25 crc kubenswrapper[4805]: E0217 00:38:25.103702 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc3d85a-bf6d-4592-a085-dd47efd5331f" containerName="util" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.103714 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc3d85a-bf6d-4592-a085-dd47efd5331f" containerName="util" Feb 17 00:38:25 crc kubenswrapper[4805]: E0217 00:38:25.103726 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06297f22-f922-4055-a8b8-084dc9e2fad5" containerName="registry-server" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.103733 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="06297f22-f922-4055-a8b8-084dc9e2fad5" containerName="registry-server" Feb 17 00:38:25 crc kubenswrapper[4805]: E0217 00:38:25.103742 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06297f22-f922-4055-a8b8-084dc9e2fad5" containerName="extract-content" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.103748 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="06297f22-f922-4055-a8b8-084dc9e2fad5" containerName="extract-content" Feb 17 00:38:25 crc kubenswrapper[4805]: E0217 00:38:25.103759 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06297f22-f922-4055-a8b8-084dc9e2fad5" containerName="extract-utilities" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.103765 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="06297f22-f922-4055-a8b8-084dc9e2fad5" containerName="extract-utilities" Feb 17 00:38:25 crc kubenswrapper[4805]: E0217 00:38:25.103774 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc3d85a-bf6d-4592-a085-dd47efd5331f" containerName="extract" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.103780 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc3d85a-bf6d-4592-a085-dd47efd5331f" containerName="extract" Feb 17 00:38:25 crc kubenswrapper[4805]: E0217 00:38:25.103790 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc3d85a-bf6d-4592-a085-dd47efd5331f" containerName="pull" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.103796 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc3d85a-bf6d-4592-a085-dd47efd5331f" containerName="pull" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.103922 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="06297f22-f922-4055-a8b8-084dc9e2fad5" containerName="registry-server" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.103931 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3cc3d85a-bf6d-4592-a085-dd47efd5331f" containerName="extract" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.104432 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.105765 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.107592 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.107669 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.108726 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.109395 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-2t2xz" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.123304 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf"] Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.202515 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tnbb\" (UniqueName: \"kubernetes.io/projected/1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8-kube-api-access-7tnbb\") pod \"metallb-operator-controller-manager-8595899c55-2hhkf\" (UID: \"1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8\") " pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.202615 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8-webhook-cert\") pod \"metallb-operator-controller-manager-8595899c55-2hhkf\" (UID: \"1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8\") " pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.202641 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8-apiservice-cert\") pod \"metallb-operator-controller-manager-8595899c55-2hhkf\" (UID: \"1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8\") " pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.303519 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8-webhook-cert\") pod \"metallb-operator-controller-manager-8595899c55-2hhkf\" (UID: \"1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8\") " pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.303575 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8-apiservice-cert\") pod \"metallb-operator-controller-manager-8595899c55-2hhkf\" (UID: 
\"1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8\") " pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.303652 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tnbb\" (UniqueName: \"kubernetes.io/projected/1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8-kube-api-access-7tnbb\") pod \"metallb-operator-controller-manager-8595899c55-2hhkf\" (UID: \"1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8\") " pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.310166 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8-apiservice-cert\") pod \"metallb-operator-controller-manager-8595899c55-2hhkf\" (UID: \"1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8\") " pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.321915 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8-webhook-cert\") pod \"metallb-operator-controller-manager-8595899c55-2hhkf\" (UID: \"1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8\") " pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.324500 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tnbb\" (UniqueName: \"kubernetes.io/projected/1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8-kube-api-access-7tnbb\") pod \"metallb-operator-controller-manager-8595899c55-2hhkf\" (UID: \"1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8\") " pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.347702 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr"] Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.348476 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.352267 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.352267 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8dctp" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.352267 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.368057 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr"] Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.418686 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.505715 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d2f5088-2aea-4d14-96a1-e1b14904efa0-apiservice-cert\") pod \"metallb-operator-webhook-server-676bc65957-7vlsr\" (UID: \"8d2f5088-2aea-4d14-96a1-e1b14904efa0\") " pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.505772 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxhp\" (UniqueName: \"kubernetes.io/projected/8d2f5088-2aea-4d14-96a1-e1b14904efa0-kube-api-access-wvxhp\") pod \"metallb-operator-webhook-server-676bc65957-7vlsr\" (UID: \"8d2f5088-2aea-4d14-96a1-e1b14904efa0\") " pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.505812 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d2f5088-2aea-4d14-96a1-e1b14904efa0-webhook-cert\") pod \"metallb-operator-webhook-server-676bc65957-7vlsr\" (UID: \"8d2f5088-2aea-4d14-96a1-e1b14904efa0\") " pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.606861 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d2f5088-2aea-4d14-96a1-e1b14904efa0-apiservice-cert\") pod \"metallb-operator-webhook-server-676bc65957-7vlsr\" (UID: \"8d2f5088-2aea-4d14-96a1-e1b14904efa0\") " pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.607144 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvxhp\" (UniqueName: \"kubernetes.io/projected/8d2f5088-2aea-4d14-96a1-e1b14904efa0-kube-api-access-wvxhp\") pod \"metallb-operator-webhook-server-676bc65957-7vlsr\" (UID: \"8d2f5088-2aea-4d14-96a1-e1b14904efa0\") " pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.607178 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d2f5088-2aea-4d14-96a1-e1b14904efa0-webhook-cert\") pod \"metallb-operator-webhook-server-676bc65957-7vlsr\" (UID: \"8d2f5088-2aea-4d14-96a1-e1b14904efa0\") " pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.621185 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d2f5088-2aea-4d14-96a1-e1b14904efa0-apiservice-cert\") pod \"metallb-operator-webhook-server-676bc65957-7vlsr\" (UID: \"8d2f5088-2aea-4d14-96a1-e1b14904efa0\") " pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.621714 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d2f5088-2aea-4d14-96a1-e1b14904efa0-webhook-cert\") pod \"metallb-operator-webhook-server-676bc65957-7vlsr\" (UID: \"8d2f5088-2aea-4d14-96a1-e1b14904efa0\") " 
pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.626086 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvxhp\" (UniqueName: \"kubernetes.io/projected/8d2f5088-2aea-4d14-96a1-e1b14904efa0-kube-api-access-wvxhp\") pod \"metallb-operator-webhook-server-676bc65957-7vlsr\" (UID: \"8d2f5088-2aea-4d14-96a1-e1b14904efa0\") " pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.665920 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:25 crc kubenswrapper[4805]: I0217 00:38:25.863443 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf"] Feb 17 00:38:25 crc kubenswrapper[4805]: W0217 00:38:25.872658 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a4b50ae_ecf2_4925_8d51_c9e1d1cdd2e8.slice/crio-774efc9c2452b87178f9067684576f423b46158e7256273bc9fe0af83b24ef3d WatchSource:0}: Error finding container 774efc9c2452b87178f9067684576f423b46158e7256273bc9fe0af83b24ef3d: Status 404 returned error can't find the container with id 774efc9c2452b87178f9067684576f423b46158e7256273bc9fe0af83b24ef3d Feb 17 00:38:26 crc kubenswrapper[4805]: W0217 00:38:26.082859 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d2f5088_2aea_4d14_96a1_e1b14904efa0.slice/crio-1fe04125534884abde6b1b4c00cdc5e5eb0b48cd9a8205e0832924129f004b1a WatchSource:0}: Error finding container 1fe04125534884abde6b1b4c00cdc5e5eb0b48cd9a8205e0832924129f004b1a: Status 404 returned error can't find the container with id 1fe04125534884abde6b1b4c00cdc5e5eb0b48cd9a8205e0832924129f004b1a Feb 17 00:38:26 crc kubenswrapper[4805]: I0217 00:38:26.083363 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr"] Feb 17 00:38:26 crc kubenswrapper[4805]: I0217 00:38:26.862806 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" event={"ID":"8d2f5088-2aea-4d14-96a1-e1b14904efa0","Type":"ContainerStarted","Data":"1fe04125534884abde6b1b4c00cdc5e5eb0b48cd9a8205e0832924129f004b1a"} Feb 17 00:38:26 crc kubenswrapper[4805]: I0217 00:38:26.864753 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" event={"ID":"1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8","Type":"ContainerStarted","Data":"774efc9c2452b87178f9067684576f423b46158e7256273bc9fe0af83b24ef3d"} Feb 17 00:38:31 crc kubenswrapper[4805]: I0217 00:38:31.912380 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" event={"ID":"8d2f5088-2aea-4d14-96a1-e1b14904efa0","Type":"ContainerStarted","Data":"112c183449619428f762c9e2be875330b15692882c1296b35c0d971c464f294b"} Feb 17 00:38:31 crc kubenswrapper[4805]: I0217 00:38:31.912830 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:38:31 crc kubenswrapper[4805]: I0217 00:38:31.914310 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" event={"ID":"1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8","Type":"ContainerStarted","Data":"74108a77457de01668bb620a17a5b52280f65703f06cde708abd925fd4f3604c"} Feb 17 00:38:31 crc kubenswrapper[4805]: I0217 00:38:31.914452 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:38:31 crc kubenswrapper[4805]: I0217 00:38:31.932228 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" podStartSLOduration=1.304097468 podStartE2EDuration="6.9322108s" podCreationTimestamp="2026-02-17 00:38:25 +0000 UTC" firstStartedPulling="2026-02-17 00:38:26.085609945 +0000 UTC m=+932.101419343" lastFinishedPulling="2026-02-17 00:38:31.713723267 +0000 UTC m=+937.729532675" observedRunningTime="2026-02-17 00:38:31.929947359 +0000 UTC m=+937.945756757" watchObservedRunningTime="2026-02-17 00:38:31.9322108 +0000 UTC m=+937.948020208" Feb 17 00:38:31 crc kubenswrapper[4805]: I0217 00:38:31.956053 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" podStartSLOduration=1.138809039 podStartE2EDuration="6.956035652s" podCreationTimestamp="2026-02-17 00:38:25 +0000 UTC" firstStartedPulling="2026-02-17 00:38:25.875204562 +0000 UTC m=+931.891013960" lastFinishedPulling="2026-02-17 00:38:31.692431135 +0000 UTC m=+937.708240573" observedRunningTime="2026-02-17 00:38:31.95376631 +0000 UTC m=+937.969575718" watchObservedRunningTime="2026-02-17 00:38:31.956035652 +0000 UTC m=+937.971845060" Feb 17 00:38:45 crc kubenswrapper[4805]: I0217 00:38:45.671037 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-676bc65957-7vlsr" Feb 17 00:39:05 crc kubenswrapper[4805]: I0217 00:39:05.422828 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-8595899c55-2hhkf" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.233659 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-8kmsb"] Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.236941 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.240443 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.242366 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.243653 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-94hrc" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.250067 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d"] Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.256851 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.258844 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.260520 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d"] Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.335346 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-m7ccg"] Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.336772 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-m7ccg" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.339110 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.339109 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.339486 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.339651 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pp2gc" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.351316 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-mpwzw"] Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.353932 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.357445 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-frr-sockets\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.357491 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4kq8\" (UniqueName: \"kubernetes.io/projected/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-kube-api-access-x4kq8\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.357540 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lprqd\" (UniqueName: \"kubernetes.io/projected/a824b3ba-107f-4f67-bcca-690632e343c2-kube-api-access-lprqd\") pod \"frr-k8s-webhook-server-78b44bf5bb-wml4d\" (UID: \"a824b3ba-107f-4f67-bcca-690632e343c2\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.357567 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-frr-conf\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.357596 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-metrics\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.357633 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.357654 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-frr-startup\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.357706 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-reloader\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.357748 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-metrics-certs\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.357794 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a824b3ba-107f-4f67-bcca-690632e343c2-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-wml4d\" (UID: \"a824b3ba-107f-4f67-bcca-690632e343c2\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.359109 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-mpwzw"] Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460097 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-metrics-certs\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460175 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-metrics-certs\") pod \"controller-69bbfbf88f-mpwzw\" (UID: \"8d4abb9e-d062-4155-bb5d-ef34d3ddc282\") " pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460207 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a824b3ba-107f-4f67-bcca-690632e343c2-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-wml4d\" (UID: \"a824b3ba-107f-4f67-bcca-690632e343c2\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460248 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-frr-sockets\") pod \"frr-k8s-8kmsb\" (UID: 
\"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460274 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4kq8\" (UniqueName: \"kubernetes.io/projected/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-kube-api-access-x4kq8\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-frr-conf\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460351 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lprqd\" (UniqueName: \"kubernetes.io/projected/a824b3ba-107f-4f67-bcca-690632e343c2-kube-api-access-lprqd\") pod \"frr-k8s-webhook-server-78b44bf5bb-wml4d\" (UID: \"a824b3ba-107f-4f67-bcca-690632e343c2\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460381 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-metrics\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460418 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx84k\" (UniqueName: \"kubernetes.io/projected/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-kube-api-access-qx84k\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460441 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-metrics-certs\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460475 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-frr-startup\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460497 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-cert\") pod \"controller-69bbfbf88f-mpwzw\" (UID: \"8d4abb9e-d062-4155-bb5d-ef34d3ddc282\") " pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460527 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-reloader\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460551 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dm96\" (UniqueName: \"kubernetes.io/projected/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-kube-api-access-9dm96\") pod \"controller-69bbfbf88f-mpwzw\" (UID: \"8d4abb9e-d062-4155-bb5d-ef34d3ddc282\") " pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460575 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-metallb-excludel2\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.460600 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.461094 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-frr-sockets\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.461400 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-reloader\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.461612 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-metrics\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.461875 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-frr-conf\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.462314 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-frr-startup\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.480129 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-metrics-certs\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.481038 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a824b3ba-107f-4f67-bcca-690632e343c2-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-wml4d\" (UID: \"a824b3ba-107f-4f67-bcca-690632e343c2\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.482304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4kq8\" (UniqueName: \"kubernetes.io/projected/dc5ad3ec-0480-4f9f-ac09-1506aa092f49-kube-api-access-x4kq8\") pod \"frr-k8s-8kmsb\" (UID: \"dc5ad3ec-0480-4f9f-ac09-1506aa092f49\") " pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.484426 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lprqd\" (UniqueName: \"kubernetes.io/projected/a824b3ba-107f-4f67-bcca-690632e343c2-kube-api-access-lprqd\") pod \"frr-k8s-webhook-server-78b44bf5bb-wml4d\" (UID: \"a824b3ba-107f-4f67-bcca-690632e343c2\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.553341 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.562107 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-metrics-certs\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.562151 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx84k\" (UniqueName: \"kubernetes.io/projected/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-kube-api-access-qx84k\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:06 crc kubenswrapper[4805]: E0217 00:39:06.562278 4805 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 17 00:39:06 crc kubenswrapper[4805]: E0217 00:39:06.562345 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-metrics-certs podName:f40bf9bc-c85c-4415-99f9-95daf9ad57ca nodeName:}" failed. No retries permitted until 2026-02-17 00:39:07.062309616 +0000 UTC m=+973.078119014 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-metrics-certs") pod "speaker-m7ccg" (UID: "f40bf9bc-c85c-4415-99f9-95daf9ad57ca") : secret "speaker-certs-secret" not found Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.562687 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-cert\") pod \"controller-69bbfbf88f-mpwzw\" (UID: \"8d4abb9e-d062-4155-bb5d-ef34d3ddc282\") " pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.562722 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-metallb-excludel2\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.562763 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dm96\" (UniqueName: \"kubernetes.io/projected/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-kube-api-access-9dm96\") pod \"controller-69bbfbf88f-mpwzw\" (UID: \"8d4abb9e-d062-4155-bb5d-ef34d3ddc282\") " pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.562783 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:06 crc kubenswrapper[4805]: E0217 00:39:06.562951 4805 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 00:39:06 crc kubenswrapper[4805]: E0217 00:39:06.563028 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist podName:f40bf9bc-c85c-4415-99f9-95daf9ad57ca nodeName:}" failed. No retries permitted until 2026-02-17 00:39:07.063008755 +0000 UTC m=+973.078818163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist") pod "speaker-m7ccg" (UID: "f40bf9bc-c85c-4415-99f9-95daf9ad57ca") : secret "metallb-memberlist" not found Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.563062 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-metrics-certs\") pod \"controller-69bbfbf88f-mpwzw\" (UID: \"8d4abb9e-d062-4155-bb5d-ef34d3ddc282\") " pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:06 crc kubenswrapper[4805]: E0217 00:39:06.563128 4805 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 17 00:39:06 crc kubenswrapper[4805]: E0217 00:39:06.563156 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-metrics-certs podName:8d4abb9e-d062-4155-bb5d-ef34d3ddc282 nodeName:}" failed. No retries permitted until 2026-02-17 00:39:07.063148899 +0000 UTC m=+973.078958297 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-metrics-certs") pod "controller-69bbfbf88f-mpwzw" (UID: "8d4abb9e-d062-4155-bb5d-ef34d3ddc282") : secret "controller-certs-secret" not found Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.563676 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-metallb-excludel2\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.564878 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.571904 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.576391 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-cert\") pod \"controller-69bbfbf88f-mpwzw\" (UID: \"8d4abb9e-d062-4155-bb5d-ef34d3ddc282\") " pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.581717 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx84k\" (UniqueName: \"kubernetes.io/projected/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-kube-api-access-qx84k\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.582077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dm96\" (UniqueName: \"kubernetes.io/projected/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-kube-api-access-9dm96\") pod \"controller-69bbfbf88f-mpwzw\" (UID: \"8d4abb9e-d062-4155-bb5d-ef34d3ddc282\") " pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:06 crc kubenswrapper[4805]: I0217 00:39:06.843448 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8kmsb" event={"ID":"dc5ad3ec-0480-4f9f-ac09-1506aa092f49","Type":"ContainerStarted","Data":"6f65e8d333224716dec82f0ed34ca56d1b654b1c43f3be6cc0dfcc35011b50c4"} Feb 17 00:39:07 crc kubenswrapper[4805]: I0217 00:39:07.020001 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d"] Feb 17 00:39:07 crc kubenswrapper[4805]: I0217 00:39:07.069850 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-metrics-certs\") pod \"controller-69bbfbf88f-mpwzw\" (UID: \"8d4abb9e-d062-4155-bb5d-ef34d3ddc282\") " pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:07 crc kubenswrapper[4805]: I0217 00:39:07.070215 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-metrics-certs\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:07 crc kubenswrapper[4805]: I0217 00:39:07.070509 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:07 crc kubenswrapper[4805]: E0217 00:39:07.070720 4805 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 00:39:07 crc kubenswrapper[4805]: E0217 00:39:07.070974 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist podName:f40bf9bc-c85c-4415-99f9-95daf9ad57ca nodeName:}" failed. No retries permitted until 2026-02-17 00:39:08.070954586 +0000 UTC m=+974.086763994 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist") pod "speaker-m7ccg" (UID: "f40bf9bc-c85c-4415-99f9-95daf9ad57ca") : secret "metallb-memberlist" not found Feb 17 00:39:07 crc kubenswrapper[4805]: I0217 00:39:07.079243 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-metrics-certs\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:07 crc kubenswrapper[4805]: I0217 00:39:07.082966 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d4abb9e-d062-4155-bb5d-ef34d3ddc282-metrics-certs\") pod \"controller-69bbfbf88f-mpwzw\" (UID: \"8d4abb9e-d062-4155-bb5d-ef34d3ddc282\") " pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:07 crc kubenswrapper[4805]: I0217 00:39:07.266306 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:07 crc kubenswrapper[4805]: I0217 00:39:07.782150 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-mpwzw"] Feb 17 00:39:07 crc kubenswrapper[4805]: I0217 00:39:07.865593 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" event={"ID":"a824b3ba-107f-4f67-bcca-690632e343c2","Type":"ContainerStarted","Data":"2db46e685b05f698e72ef9e7e9432619bec81141771c31cf293647e590c6d3f7"} Feb 17 00:39:07 crc kubenswrapper[4805]: I0217 00:39:07.869043 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-mpwzw" event={"ID":"8d4abb9e-d062-4155-bb5d-ef34d3ddc282","Type":"ContainerStarted","Data":"93a145811a925127c7bbbf34b3883a8bcbe30fb2503ba6c5cf5de2e47c96b678"} Feb 17 00:39:08 crc kubenswrapper[4805]: I0217 00:39:08.093583 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:08 crc kubenswrapper[4805]: E0217 00:39:08.093816 4805 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 00:39:08 crc kubenswrapper[4805]: E0217 00:39:08.094295 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist podName:f40bf9bc-c85c-4415-99f9-95daf9ad57ca nodeName:}" failed. 
No retries permitted until 2026-02-17 00:39:10.094268074 +0000 UTC m=+976.110077482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist") pod "speaker-m7ccg" (UID: "f40bf9bc-c85c-4415-99f9-95daf9ad57ca") : secret "metallb-memberlist" not found Feb 17 00:39:08 crc kubenswrapper[4805]: I0217 00:39:08.877817 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-mpwzw" event={"ID":"8d4abb9e-d062-4155-bb5d-ef34d3ddc282","Type":"ContainerStarted","Data":"b36b286c88f878497b88bf941ac320151bff618ecee48d16ad8bc518de6f4887"} Feb 17 00:39:08 crc kubenswrapper[4805]: I0217 00:39:08.877851 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-mpwzw" event={"ID":"8d4abb9e-d062-4155-bb5d-ef34d3ddc282","Type":"ContainerStarted","Data":"87246a9d10d1178f94f16f32b3a64ec6eeabc999453c7059b059ffd91d6647c0"} Feb 17 00:39:08 crc kubenswrapper[4805]: I0217 00:39:08.877944 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:08 crc kubenswrapper[4805]: I0217 00:39:08.894917 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-mpwzw" podStartSLOduration=2.8948978 podStartE2EDuration="2.8948978s" podCreationTimestamp="2026-02-17 00:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:39:08.893609774 +0000 UTC m=+974.909419172" watchObservedRunningTime="2026-02-17 00:39:08.8948978 +0000 UTC m=+974.910707198" Feb 17 00:39:10 crc kubenswrapper[4805]: I0217 00:39:10.120578 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:10 crc kubenswrapper[4805]: I0217 00:39:10.128628 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f40bf9bc-c85c-4415-99f9-95daf9ad57ca-memberlist\") pod \"speaker-m7ccg\" (UID: \"f40bf9bc-c85c-4415-99f9-95daf9ad57ca\") " pod="metallb-system/speaker-m7ccg" Feb 17 00:39:10 crc kubenswrapper[4805]: I0217 00:39:10.254478 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-m7ccg" Feb 17 00:39:10 crc kubenswrapper[4805]: W0217 00:39:10.274219 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf40bf9bc_c85c_4415_99f9_95daf9ad57ca.slice/crio-cf911fe23b88d7ed4b06c80b9dfbcd7aa8b85f771f1f322e24eb6e110894ee60 WatchSource:0}: Error finding container cf911fe23b88d7ed4b06c80b9dfbcd7aa8b85f771f1f322e24eb6e110894ee60: Status 404 returned error can't find the container with id cf911fe23b88d7ed4b06c80b9dfbcd7aa8b85f771f1f322e24eb6e110894ee60 Feb 17 00:39:10 crc kubenswrapper[4805]: I0217 00:39:10.897201 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m7ccg" event={"ID":"f40bf9bc-c85c-4415-99f9-95daf9ad57ca","Type":"ContainerStarted","Data":"7a1269a82466d54e8c5c78ad29426ea0cb099a53e30ec2ab133ad1ac8e8182ed"} Feb 17 00:39:10 crc kubenswrapper[4805]: I0217 00:39:10.897663 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m7ccg" event={"ID":"f40bf9bc-c85c-4415-99f9-95daf9ad57ca","Type":"ContainerStarted","Data":"87fe7163d36009227379a8360896804287e86d27ffd84bd72331036ac3c381bf"} Feb 17 00:39:10 crc kubenswrapper[4805]: I0217 00:39:10.897674 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m7ccg" event={"ID":"f40bf9bc-c85c-4415-99f9-95daf9ad57ca","Type":"ContainerStarted","Data":"cf911fe23b88d7ed4b06c80b9dfbcd7aa8b85f771f1f322e24eb6e110894ee60"} Feb 17 00:39:10 crc kubenswrapper[4805]: I0217 00:39:10.897900 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-m7ccg" Feb 17 00:39:10 crc kubenswrapper[4805]: I0217 00:39:10.917603 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-m7ccg" podStartSLOduration=4.917579622 podStartE2EDuration="4.917579622s" podCreationTimestamp="2026-02-17 00:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:39:10.913237921 +0000 UTC m=+976.929047319" watchObservedRunningTime="2026-02-17 00:39:10.917579622 +0000 UTC m=+976.933389020" Feb 17 00:39:14 crc kubenswrapper[4805]: I0217 00:39:14.953525 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" event={"ID":"a824b3ba-107f-4f67-bcca-690632e343c2","Type":"ContainerStarted","Data":"5f554efe581ff2f53e3c0d72312f7ec9d83c1d9be98867349f8c7eb564594f92"} Feb 17 00:39:14 crc kubenswrapper[4805]: I0217 00:39:14.954061 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" Feb 17 00:39:14 crc kubenswrapper[4805]: I0217 00:39:14.956473 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8kmsb" event={"ID":"dc5ad3ec-0480-4f9f-ac09-1506aa092f49","Type":"ContainerStarted","Data":"fd54bd217dd5957c59efeff03a7213b487b632286a09f48a1505ab36edcff23c"} Feb 17 00:39:14 crc kubenswrapper[4805]: I0217 00:39:14.975165 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" podStartSLOduration=1.268895752 podStartE2EDuration="8.975146213s" podCreationTimestamp="2026-02-17 00:39:06 +0000 UTC" firstStartedPulling="2026-02-17 00:39:07.02712794 +0000 UTC m=+973.042937378" lastFinishedPulling="2026-02-17 00:39:14.733378451 +0000 UTC m=+980.749187839" 
observedRunningTime="2026-02-17 00:39:14.970483864 +0000 UTC m=+980.986293262" watchObservedRunningTime="2026-02-17 00:39:14.975146213 +0000 UTC m=+980.990955611" Feb 17 00:39:15 crc kubenswrapper[4805]: I0217 00:39:15.965608 4805 generic.go:334] "Generic (PLEG): container finished" podID="dc5ad3ec-0480-4f9f-ac09-1506aa092f49" containerID="fd54bd217dd5957c59efeff03a7213b487b632286a09f48a1505ab36edcff23c" exitCode=0 Feb 17 00:39:15 crc kubenswrapper[4805]: I0217 00:39:15.965952 4805 generic.go:334] "Generic (PLEG): container finished" podID="dc5ad3ec-0480-4f9f-ac09-1506aa092f49" containerID="209266abb47f67725c2b0687f2c4038d9cf7c9e44fabcb20d5a8b4a1fcee1c5f" exitCode=0 Feb 17 00:39:15 crc kubenswrapper[4805]: I0217 00:39:15.965703 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8kmsb" event={"ID":"dc5ad3ec-0480-4f9f-ac09-1506aa092f49","Type":"ContainerDied","Data":"fd54bd217dd5957c59efeff03a7213b487b632286a09f48a1505ab36edcff23c"} Feb 17 00:39:15 crc kubenswrapper[4805]: I0217 00:39:15.966129 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8kmsb" event={"ID":"dc5ad3ec-0480-4f9f-ac09-1506aa092f49","Type":"ContainerDied","Data":"209266abb47f67725c2b0687f2c4038d9cf7c9e44fabcb20d5a8b4a1fcee1c5f"} Feb 17 00:39:16 crc kubenswrapper[4805]: I0217 00:39:16.974372 4805 generic.go:334] "Generic (PLEG): container finished" podID="dc5ad3ec-0480-4f9f-ac09-1506aa092f49" containerID="77df3e3590ff5c2a06770a2ae8345c3e03794336c0cd90b135f9325649dbe67c" exitCode=0 Feb 17 00:39:16 crc kubenswrapper[4805]: I0217 00:39:16.974412 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8kmsb" event={"ID":"dc5ad3ec-0480-4f9f-ac09-1506aa092f49","Type":"ContainerDied","Data":"77df3e3590ff5c2a06770a2ae8345c3e03794336c0cd90b135f9325649dbe67c"} Feb 17 00:39:17 crc kubenswrapper[4805]: I0217 00:39:17.269265 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-mpwzw" Feb 17 00:39:17 crc kubenswrapper[4805]: I0217 00:39:17.988452 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8kmsb" event={"ID":"dc5ad3ec-0480-4f9f-ac09-1506aa092f49","Type":"ContainerStarted","Data":"2db278f7c67d91dd78f154cf94ab8d59ea5ca99713bce07a1658673260d971c9"} Feb 17 00:39:17 crc kubenswrapper[4805]: I0217 00:39:17.988515 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8kmsb" event={"ID":"dc5ad3ec-0480-4f9f-ac09-1506aa092f49","Type":"ContainerStarted","Data":"45954fc3b618dba47839764cd60a0e4fadf8705b1d7e9f7e3e7e511a6bf8c0bf"} Feb 17 00:39:17 crc kubenswrapper[4805]: I0217 00:39:17.988535 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8kmsb" event={"ID":"dc5ad3ec-0480-4f9f-ac09-1506aa092f49","Type":"ContainerStarted","Data":"a5d53ac1f2d33e72d2c48aba0280ef6a7ef6bd845487ed36da5a40bfd916fe1f"} Feb 17 00:39:17 crc kubenswrapper[4805]: I0217 00:39:17.988553 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8kmsb" event={"ID":"dc5ad3ec-0480-4f9f-ac09-1506aa092f49","Type":"ContainerStarted","Data":"7958b6ce9913e569f306c120c195e7faefdde0308911ed149a048a58dc89b6c8"} Feb 17 00:39:17 crc kubenswrapper[4805]: I0217 00:39:17.988571 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8kmsb" event={"ID":"dc5ad3ec-0480-4f9f-ac09-1506aa092f49","Type":"ContainerStarted","Data":"35845e4fdaa07ff863fc47d816f31a2b052c6c0a37b8470ec2fcc0bae2c9f2ca"} Feb 17 
00:39:19 crc kubenswrapper[4805]: I0217 00:39:19.018080 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8kmsb" event={"ID":"dc5ad3ec-0480-4f9f-ac09-1506aa092f49","Type":"ContainerStarted","Data":"1eddcb24dcf94fb16169e45b0d764b1c7b76aa8f9bc592e321f07433f95ee8de"} Feb 17 00:39:19 crc kubenswrapper[4805]: I0217 00:39:19.019569 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:19 crc kubenswrapper[4805]: I0217 00:39:19.066891 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-8kmsb" podStartSLOduration=5.059680796 podStartE2EDuration="13.066865312s" podCreationTimestamp="2026-02-17 00:39:06 +0000 UTC" firstStartedPulling="2026-02-17 00:39:06.75660511 +0000 UTC m=+972.772414508" lastFinishedPulling="2026-02-17 00:39:14.763789626 +0000 UTC m=+980.779599024" observedRunningTime="2026-02-17 00:39:19.060957398 +0000 UTC m=+985.076766836" watchObservedRunningTime="2026-02-17 00:39:19.066865312 +0000 UTC m=+985.082674740" Feb 17 00:39:20 crc kubenswrapper[4805]: I0217 00:39:20.265786 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-m7ccg" Feb 17 00:39:21 crc kubenswrapper[4805]: I0217 00:39:21.554126 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:21 crc kubenswrapper[4805]: I0217 00:39:21.621733 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:23 crc kubenswrapper[4805]: I0217 00:39:23.133483 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nfxqx"] Feb 17 00:39:23 crc kubenswrapper[4805]: I0217 00:39:23.135776 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nfxqx" Feb 17 00:39:23 crc kubenswrapper[4805]: I0217 00:39:23.138163 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 17 00:39:23 crc kubenswrapper[4805]: I0217 00:39:23.140167 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9bw4g" Feb 17 00:39:23 crc kubenswrapper[4805]: I0217 00:39:23.141446 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 17 00:39:23 crc kubenswrapper[4805]: I0217 00:39:23.168772 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nfxqx"] Feb 17 00:39:23 crc kubenswrapper[4805]: I0217 00:39:23.248604 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mlc6\" (UniqueName: \"kubernetes.io/projected/30a74938-2de2-4645-8d64-e9b604c3abe8-kube-api-access-5mlc6\") pod \"openstack-operator-index-nfxqx\" (UID: \"30a74938-2de2-4645-8d64-e9b604c3abe8\") " pod="openstack-operators/openstack-operator-index-nfxqx" Feb 17 00:39:23 crc kubenswrapper[4805]: I0217 00:39:23.350370 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mlc6\" (UniqueName: \"kubernetes.io/projected/30a74938-2de2-4645-8d64-e9b604c3abe8-kube-api-access-5mlc6\") pod \"openstack-operator-index-nfxqx\" (UID: \"30a74938-2de2-4645-8d64-e9b604c3abe8\") " pod="openstack-operators/openstack-operator-index-nfxqx" Feb 17 00:39:23 crc kubenswrapper[4805]: I0217 00:39:23.367394 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mlc6\" (UniqueName: \"kubernetes.io/projected/30a74938-2de2-4645-8d64-e9b604c3abe8-kube-api-access-5mlc6\") pod \"openstack-operator-index-nfxqx\" (UID: \"30a74938-2de2-4645-8d64-e9b604c3abe8\") " pod="openstack-operators/openstack-operator-index-nfxqx" Feb 17 00:39:23 crc kubenswrapper[4805]: I0217 00:39:23.470106 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nfxqx" Feb 17 00:39:23 crc kubenswrapper[4805]: I0217 00:39:23.903032 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nfxqx"] Feb 17 00:39:23 crc kubenswrapper[4805]: W0217 00:39:23.911583 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30a74938_2de2_4645_8d64_e9b604c3abe8.slice/crio-4fbfd85db3cf89db3c91e9cf225bb4203da96edf2103f2c00054ba91dc53f923 WatchSource:0}: Error finding container 4fbfd85db3cf89db3c91e9cf225bb4203da96edf2103f2c00054ba91dc53f923: Status 404 returned error can't find the container with id 4fbfd85db3cf89db3c91e9cf225bb4203da96edf2103f2c00054ba91dc53f923 Feb 17 00:39:24 crc kubenswrapper[4805]: I0217 00:39:24.074949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nfxqx" event={"ID":"30a74938-2de2-4645-8d64-e9b604c3abe8","Type":"ContainerStarted","Data":"4fbfd85db3cf89db3c91e9cf225bb4203da96edf2103f2c00054ba91dc53f923"} Feb 17 00:39:26 crc kubenswrapper[4805]: I0217 00:39:26.508240 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nfxqx"] Feb 17 00:39:26 crc kubenswrapper[4805]: I0217 00:39:26.604117 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-wml4d" Feb 17 00:39:27 crc kubenswrapper[4805]: I0217 00:39:27.113578 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qt465"] Feb 17 00:39:27 crc kubenswrapper[4805]: I0217 00:39:27.117395 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qt465" Feb 17 00:39:27 crc kubenswrapper[4805]: I0217 00:39:27.123116 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qt465"] Feb 17 00:39:27 crc kubenswrapper[4805]: I0217 00:39:27.213145 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9v7k\" (UniqueName: \"kubernetes.io/projected/a080fb8f-92cc-40dd-b627-f4c04f83eace-kube-api-access-k9v7k\") pod \"openstack-operator-index-qt465\" (UID: \"a080fb8f-92cc-40dd-b627-f4c04f83eace\") " pod="openstack-operators/openstack-operator-index-qt465" Feb 17 00:39:27 crc kubenswrapper[4805]: I0217 00:39:27.314455 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9v7k\" (UniqueName: \"kubernetes.io/projected/a080fb8f-92cc-40dd-b627-f4c04f83eace-kube-api-access-k9v7k\") pod \"openstack-operator-index-qt465\" (UID: \"a080fb8f-92cc-40dd-b627-f4c04f83eace\") " pod="openstack-operators/openstack-operator-index-qt465" Feb 17 00:39:27 crc kubenswrapper[4805]: I0217 00:39:27.339825 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9v7k\" (UniqueName: \"kubernetes.io/projected/a080fb8f-92cc-40dd-b627-f4c04f83eace-kube-api-access-k9v7k\") pod \"openstack-operator-index-qt465\" (UID: \"a080fb8f-92cc-40dd-b627-f4c04f83eace\") " pod="openstack-operators/openstack-operator-index-qt465" Feb 17 00:39:27 crc kubenswrapper[4805]: I0217 00:39:27.440634 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qt465" Feb 17 00:39:27 crc kubenswrapper[4805]: I0217 00:39:27.981915 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qt465"] Feb 17 00:39:27 crc kubenswrapper[4805]: W0217 00:39:27.988650 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda080fb8f_92cc_40dd_b627_f4c04f83eace.slice/crio-a300af00884a4e9dd9c34ad3ab8ba2b699e1ac351092806eed7e1d75efdc1220 WatchSource:0}: Error finding container a300af00884a4e9dd9c34ad3ab8ba2b699e1ac351092806eed7e1d75efdc1220: Status 404 returned error can't find the container with id a300af00884a4e9dd9c34ad3ab8ba2b699e1ac351092806eed7e1d75efdc1220 Feb 17 00:39:28 crc kubenswrapper[4805]: I0217 00:39:28.127591 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nfxqx" event={"ID":"30a74938-2de2-4645-8d64-e9b604c3abe8","Type":"ContainerStarted","Data":"ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31"} Feb 17 00:39:28 crc kubenswrapper[4805]: I0217 00:39:28.127667 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-nfxqx" podUID="30a74938-2de2-4645-8d64-e9b604c3abe8" containerName="registry-server" containerID="cri-o://ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31" gracePeriod=2 Feb 17 00:39:28 crc kubenswrapper[4805]: I0217 00:39:28.130667 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qt465" event={"ID":"a080fb8f-92cc-40dd-b627-f4c04f83eace","Type":"ContainerStarted","Data":"a300af00884a4e9dd9c34ad3ab8ba2b699e1ac351092806eed7e1d75efdc1220"} Feb 17 00:39:28 crc kubenswrapper[4805]: I0217 00:39:28.151964 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nfxqx" podStartSLOduration=1.49932236 podStartE2EDuration="5.15194336s" podCreationTimestamp="2026-02-17 00:39:23 +0000 UTC" firstStartedPulling="2026-02-17 00:39:23.914075503 +0000 UTC m=+989.929884921" lastFinishedPulling="2026-02-17 00:39:27.566696533 +0000 UTC m=+993.582505921" observedRunningTime="2026-02-17 00:39:28.150985263 +0000 UTC m=+994.166794721" watchObservedRunningTime="2026-02-17 00:39:28.15194336 +0000 UTC m=+994.167752768" Feb 17 00:39:28 crc kubenswrapper[4805]: I0217 00:39:28.617982 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nfxqx" Feb 17 00:39:28 crc kubenswrapper[4805]: I0217 00:39:28.737473 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mlc6\" (UniqueName: \"kubernetes.io/projected/30a74938-2de2-4645-8d64-e9b604c3abe8-kube-api-access-5mlc6\") pod \"30a74938-2de2-4645-8d64-e9b604c3abe8\" (UID: \"30a74938-2de2-4645-8d64-e9b604c3abe8\") " Feb 17 00:39:28 crc kubenswrapper[4805]: I0217 00:39:28.743366 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a74938-2de2-4645-8d64-e9b604c3abe8-kube-api-access-5mlc6" (OuterVolumeSpecName: "kube-api-access-5mlc6") pod "30a74938-2de2-4645-8d64-e9b604c3abe8" (UID: "30a74938-2de2-4645-8d64-e9b604c3abe8"). InnerVolumeSpecName "kube-api-access-5mlc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:39:28 crc kubenswrapper[4805]: I0217 00:39:28.838764 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mlc6\" (UniqueName: \"kubernetes.io/projected/30a74938-2de2-4645-8d64-e9b604c3abe8-kube-api-access-5mlc6\") on node \"crc\" DevicePath \"\"" Feb 17 00:39:29 crc kubenswrapper[4805]: I0217 00:39:29.139672 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qt465" event={"ID":"a080fb8f-92cc-40dd-b627-f4c04f83eace","Type":"ContainerStarted","Data":"638c4680218e2237f581e0c57591705f39b2e9e30a27fe76832d95a03a3c1f59"} Feb 17 00:39:29 crc kubenswrapper[4805]: I0217 00:39:29.141669 4805 generic.go:334] "Generic (PLEG): container finished" podID="30a74938-2de2-4645-8d64-e9b604c3abe8" containerID="ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31" exitCode=0 Feb 17 00:39:29 crc kubenswrapper[4805]: I0217 00:39:29.141723 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nfxqx" event={"ID":"30a74938-2de2-4645-8d64-e9b604c3abe8","Type":"ContainerDied","Data":"ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31"} Feb 17 00:39:29 crc kubenswrapper[4805]: I0217 00:39:29.141755 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nfxqx" Feb 17 00:39:29 crc kubenswrapper[4805]: I0217 00:39:29.141784 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nfxqx" event={"ID":"30a74938-2de2-4645-8d64-e9b604c3abe8","Type":"ContainerDied","Data":"4fbfd85db3cf89db3c91e9cf225bb4203da96edf2103f2c00054ba91dc53f923"} Feb 17 00:39:29 crc kubenswrapper[4805]: I0217 00:39:29.141818 4805 scope.go:117] "RemoveContainer" containerID="ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31" Feb 17 00:39:29 crc kubenswrapper[4805]: I0217 00:39:29.164665 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qt465" podStartSLOduration=2.112221608 podStartE2EDuration="2.164644043s" podCreationTimestamp="2026-02-17 00:39:27 +0000 UTC" firstStartedPulling="2026-02-17 00:39:27.992884124 +0000 UTC m=+994.008693522" lastFinishedPulling="2026-02-17 00:39:28.045306569 +0000 UTC m=+994.061115957" observedRunningTime="2026-02-17 00:39:29.156548558 +0000 UTC m=+995.172357956" watchObservedRunningTime="2026-02-17 00:39:29.164644043 +0000 UTC m=+995.180453441" Feb 17 00:39:29 crc kubenswrapper[4805]: I0217 00:39:29.175578 4805 scope.go:117] "RemoveContainer" containerID="ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31" Feb 17 00:39:29 crc kubenswrapper[4805]: E0217 00:39:29.178774 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31\": container with ID starting with ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31 not found: ID does not exist" containerID="ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31" Feb 17 00:39:29 crc kubenswrapper[4805]: I0217 00:39:29.178836 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31"} err="failed to get container status 
\"ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31\": rpc error: code = NotFound desc = could not find container \"ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31\": container with ID starting with ad1db014a8cb24fce292c5d65729774e42b1990debe40f94e50fdba1d5e3cd31 not found: ID does not exist" Feb 17 00:39:29 crc kubenswrapper[4805]: I0217 00:39:29.180846 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nfxqx"] Feb 17 00:39:29 crc kubenswrapper[4805]: I0217 00:39:29.187452 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-nfxqx"] Feb 17 00:39:30 crc kubenswrapper[4805]: I0217 00:39:30.802456 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30a74938-2de2-4645-8d64-e9b604c3abe8" path="/var/lib/kubelet/pods/30a74938-2de2-4645-8d64-e9b604c3abe8/volumes" Feb 17 00:39:36 crc kubenswrapper[4805]: I0217 00:39:36.555356 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-8kmsb" Feb 17 00:39:37 crc kubenswrapper[4805]: I0217 00:39:37.441060 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qt465" Feb 17 00:39:37 crc kubenswrapper[4805]: I0217 00:39:37.441107 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qt465" Feb 17 00:39:37 crc kubenswrapper[4805]: I0217 00:39:37.484464 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qt465" Feb 17 00:39:38 crc kubenswrapper[4805]: I0217 00:39:38.272513 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qt465" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.574927 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l"] Feb 17 00:39:39 crc kubenswrapper[4805]: E0217 00:39:39.578080 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a74938-2de2-4645-8d64-e9b604c3abe8" containerName="registry-server" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.578249 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a74938-2de2-4645-8d64-e9b604c3abe8" containerName="registry-server" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.578511 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a74938-2de2-4645-8d64-e9b604c3abe8" containerName="registry-server" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.579884 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.582144 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-d4qxz" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.592871 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l"] Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.717549 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-bundle\") pod \"4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.718240 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-util\") pod \"4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.718394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xt8b\" (UniqueName: \"kubernetes.io/projected/30cbd298-b82b-492f-ae51-31b5ddb442ec-kube-api-access-6xt8b\") pod \"4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.820514 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xt8b\" (UniqueName: \"kubernetes.io/projected/30cbd298-b82b-492f-ae51-31b5ddb442ec-kube-api-access-6xt8b\") pod \"4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.820617 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-bundle\") pod \"4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.820694 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-util\") pod \"4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.821214 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-bundle\") pod \"4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.821293 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-util\") pod \"4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.848677 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xt8b\" (UniqueName: \"kubernetes.io/projected/30cbd298-b82b-492f-ae51-31b5ddb442ec-kube-api-access-6xt8b\") pod \"4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:39 crc kubenswrapper[4805]: I0217 00:39:39.956281 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:40 crc kubenswrapper[4805]: W0217 00:39:40.408137 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30cbd298_b82b_492f_ae51_31b5ddb442ec.slice/crio-707bd4440a672df74286330c3bd02082e043708be796af1764c527e93af6c61c WatchSource:0}: Error finding container 707bd4440a672df74286330c3bd02082e043708be796af1764c527e93af6c61c: Status 404 returned error can't find the container with id 707bd4440a672df74286330c3bd02082e043708be796af1764c527e93af6c61c Feb 17 00:39:40 crc kubenswrapper[4805]: I0217 00:39:40.410359 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l"] Feb 17 00:39:41 crc kubenswrapper[4805]: I0217 00:39:41.242736 4805 generic.go:334] "Generic (PLEG): container finished" podID="30cbd298-b82b-492f-ae51-31b5ddb442ec" containerID="90b803e2865e235abe16bc1a4769f88701659d3551a81749eaa236147fa1cc76" exitCode=0 Feb 17 00:39:41 crc kubenswrapper[4805]: I0217 00:39:41.242799 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" event={"ID":"30cbd298-b82b-492f-ae51-31b5ddb442ec","Type":"ContainerDied","Data":"90b803e2865e235abe16bc1a4769f88701659d3551a81749eaa236147fa1cc76"} Feb 17 00:39:41 crc kubenswrapper[4805]: I0217 00:39:41.244742 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" event={"ID":"30cbd298-b82b-492f-ae51-31b5ddb442ec","Type":"ContainerStarted","Data":"707bd4440a672df74286330c3bd02082e043708be796af1764c527e93af6c61c"} Feb 17 00:39:43 crc kubenswrapper[4805]: I0217 00:39:43.261992 4805 generic.go:334] "Generic (PLEG): container finished" podID="30cbd298-b82b-492f-ae51-31b5ddb442ec" containerID="2a0589f0cb4fa931906518d64c968ac54a219f9f79fd1f3e4eb07eaf7b0b5ed4" exitCode=0 Feb 17 00:39:43 crc kubenswrapper[4805]: I0217 00:39:43.262036 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" event={"ID":"30cbd298-b82b-492f-ae51-31b5ddb442ec","Type":"ContainerDied","Data":"2a0589f0cb4fa931906518d64c968ac54a219f9f79fd1f3e4eb07eaf7b0b5ed4"} Feb 17 00:39:44 crc kubenswrapper[4805]: I0217 00:39:44.275014 4805 generic.go:334] "Generic (PLEG): container finished" podID="30cbd298-b82b-492f-ae51-31b5ddb442ec" containerID="86faa0e869983b5581361e709f7156c17deba06b23a5f21a814a5c8d9858ce78" exitCode=0 Feb 17 00:39:44 crc kubenswrapper[4805]: I0217 00:39:44.275093 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" event={"ID":"30cbd298-b82b-492f-ae51-31b5ddb442ec","Type":"ContainerDied","Data":"86faa0e869983b5581361e709f7156c17deba06b23a5f21a814a5c8d9858ce78"} Feb 17 00:39:45 crc kubenswrapper[4805]: I0217 00:39:45.650346 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:45 crc kubenswrapper[4805]: I0217 00:39:45.718565 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xt8b\" (UniqueName: \"kubernetes.io/projected/30cbd298-b82b-492f-ae51-31b5ddb442ec-kube-api-access-6xt8b\") pod \"30cbd298-b82b-492f-ae51-31b5ddb442ec\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " Feb 17 00:39:45 crc kubenswrapper[4805]: I0217 00:39:45.718668 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-util\") pod \"30cbd298-b82b-492f-ae51-31b5ddb442ec\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " Feb 17 00:39:45 crc kubenswrapper[4805]: I0217 00:39:45.718747 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-bundle\") pod \"30cbd298-b82b-492f-ae51-31b5ddb442ec\" (UID: \"30cbd298-b82b-492f-ae51-31b5ddb442ec\") " Feb 17 00:39:45 crc kubenswrapper[4805]: I0217 00:39:45.720140 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-bundle" (OuterVolumeSpecName: "bundle") pod "30cbd298-b82b-492f-ae51-31b5ddb442ec" (UID: "30cbd298-b82b-492f-ae51-31b5ddb442ec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:39:45 crc kubenswrapper[4805]: I0217 00:39:45.725992 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30cbd298-b82b-492f-ae51-31b5ddb442ec-kube-api-access-6xt8b" (OuterVolumeSpecName: "kube-api-access-6xt8b") pod "30cbd298-b82b-492f-ae51-31b5ddb442ec" (UID: "30cbd298-b82b-492f-ae51-31b5ddb442ec"). InnerVolumeSpecName "kube-api-access-6xt8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:39:45 crc kubenswrapper[4805]: I0217 00:39:45.735034 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-util" (OuterVolumeSpecName: "util") pod "30cbd298-b82b-492f-ae51-31b5ddb442ec" (UID: "30cbd298-b82b-492f-ae51-31b5ddb442ec"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:39:45 crc kubenswrapper[4805]: I0217 00:39:45.820828 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-util\") on node \"crc\" DevicePath \"\"" Feb 17 00:39:45 crc kubenswrapper[4805]: I0217 00:39:45.820865 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30cbd298-b82b-492f-ae51-31b5ddb442ec-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:39:45 crc kubenswrapper[4805]: I0217 00:39:45.820879 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xt8b\" (UniqueName: \"kubernetes.io/projected/30cbd298-b82b-492f-ae51-31b5ddb442ec-kube-api-access-6xt8b\") on node \"crc\" DevicePath \"\"" Feb 17 00:39:46 crc kubenswrapper[4805]: I0217 00:39:46.298162 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" event={"ID":"30cbd298-b82b-492f-ae51-31b5ddb442ec","Type":"ContainerDied","Data":"707bd4440a672df74286330c3bd02082e043708be796af1764c527e93af6c61c"} Feb 17 00:39:46 crc kubenswrapper[4805]: I0217 00:39:46.298267 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="707bd4440a672df74286330c3bd02082e043708be796af1764c527e93af6c61c" Feb 17 00:39:46 crc kubenswrapper[4805]: I0217 00:39:46.298287 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l" Feb 17 00:39:51 crc kubenswrapper[4805]: I0217 00:39:51.888191 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m"] Feb 17 00:39:51 crc kubenswrapper[4805]: E0217 00:39:51.889092 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cbd298-b82b-492f-ae51-31b5ddb442ec" containerName="pull" Feb 17 00:39:51 crc kubenswrapper[4805]: I0217 00:39:51.889105 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cbd298-b82b-492f-ae51-31b5ddb442ec" containerName="pull" Feb 17 00:39:51 crc kubenswrapper[4805]: E0217 00:39:51.889132 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cbd298-b82b-492f-ae51-31b5ddb442ec" containerName="extract" Feb 17 00:39:51 crc kubenswrapper[4805]: I0217 00:39:51.889139 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cbd298-b82b-492f-ae51-31b5ddb442ec" containerName="extract" Feb 17 00:39:51 crc kubenswrapper[4805]: E0217 00:39:51.889156 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cbd298-b82b-492f-ae51-31b5ddb442ec" containerName="util" Feb 17 00:39:51 crc kubenswrapper[4805]: I0217 00:39:51.889164 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cbd298-b82b-492f-ae51-31b5ddb442ec" containerName="util" Feb 17 00:39:51 crc kubenswrapper[4805]: I0217 00:39:51.889304 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="30cbd298-b82b-492f-ae51-31b5ddb442ec" containerName="extract" Feb 17 00:39:51 crc kubenswrapper[4805]: I0217 00:39:51.889934 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m" Feb 17 00:39:51 crc kubenswrapper[4805]: W0217 00:39:51.892376 4805 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zl5w7": failed to list *v1.Secret: secrets "openstack-operator-controller-init-dockercfg-zl5w7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Feb 17 00:39:51 crc kubenswrapper[4805]: E0217 00:39:51.892429 4805 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-init-dockercfg-zl5w7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openstack-operator-controller-init-dockercfg-zl5w7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 17 00:39:51 crc kubenswrapper[4805]: I0217 00:39:51.921216 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m"] Feb 17 00:39:52 crc kubenswrapper[4805]: I0217 00:39:52.031690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nctbb\" (UniqueName: \"kubernetes.io/projected/1ba3b534-fbdb-4c50-9b8b-1c3e4cc32855-kube-api-access-nctbb\") pod \"openstack-operator-controller-init-77b758d6b5-kkx5m\" (UID: \"1ba3b534-fbdb-4c50-9b8b-1c3e4cc32855\") " pod="openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m" Feb 17 00:39:52 crc kubenswrapper[4805]: I0217 00:39:52.133414 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nctbb\" (UniqueName: \"kubernetes.io/projected/1ba3b534-fbdb-4c50-9b8b-1c3e4cc32855-kube-api-access-nctbb\") pod \"openstack-operator-controller-init-77b758d6b5-kkx5m\" (UID: \"1ba3b534-fbdb-4c50-9b8b-1c3e4cc32855\") " pod="openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m" Feb 17 00:39:52 crc kubenswrapper[4805]: I0217 00:39:52.152439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nctbb\" (UniqueName: \"kubernetes.io/projected/1ba3b534-fbdb-4c50-9b8b-1c3e4cc32855-kube-api-access-nctbb\") pod \"openstack-operator-controller-init-77b758d6b5-kkx5m\" (UID: \"1ba3b534-fbdb-4c50-9b8b-1c3e4cc32855\") " pod="openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m" Feb 17 00:39:53 crc kubenswrapper[4805]: I0217 00:39:53.211167 4805 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m" secret="" err="failed to sync secret cache: timed out waiting for the condition" Feb 17 00:39:53 crc kubenswrapper[4805]: I0217 00:39:53.211253 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m" Feb 17 00:39:53 crc kubenswrapper[4805]: I0217 00:39:53.355491 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zl5w7" Feb 17 00:39:53 crc kubenswrapper[4805]: I0217 00:39:53.661930 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m"] Feb 17 00:39:54 crc kubenswrapper[4805]: I0217 00:39:54.368714 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m" event={"ID":"1ba3b534-fbdb-4c50-9b8b-1c3e4cc32855","Type":"ContainerStarted","Data":"08db76926d2b95227bb7a8d32a0fff2d2fe4f726dc71f91ee2fca5ec1d81dcdd"} Feb 17 00:39:58 crc kubenswrapper[4805]: I0217 00:39:58.403385 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m" event={"ID":"1ba3b534-fbdb-4c50-9b8b-1c3e4cc32855","Type":"ContainerStarted","Data":"b22c3928fb3c52f4ddba2014d95e3b55f8207a2ac616e6eb5ac56621bcf2dea8"} Feb 17 00:39:58 crc kubenswrapper[4805]: I0217 00:39:58.403917 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m" Feb 17 00:39:58 crc kubenswrapper[4805]: I0217 00:39:58.437753 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m" podStartSLOduration=3.539877716 podStartE2EDuration="7.437729893s" podCreationTimestamp="2026-02-17 00:39:51 +0000 UTC" firstStartedPulling="2026-02-17 00:39:53.669795232 +0000 UTC m=+1019.685604650" lastFinishedPulling="2026-02-17 00:39:57.567647429 +0000 UTC m=+1023.583456827" observedRunningTime="2026-02-17 00:39:58.435579634 +0000 UTC m=+1024.451389072" watchObservedRunningTime="2026-02-17 00:39:58.437729893 +0000 UTC m=+1024.453539321" Feb 17 00:40:03 crc kubenswrapper[4805]: I0217 00:40:03.214691 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-77b758d6b5-kkx5m" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.077390 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.077912 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.451934 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.452853 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.460637 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-nvv9v" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.471779 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.473632 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.475791 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-tvcq9" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.491374 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.496454 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.510084 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.512050 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.540283 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-sct7d" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.574871 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48gc\" (UniqueName: \"kubernetes.io/projected/d5c2df2a-fe2c-4a7f-ab0c-247fac6a47e9-kube-api-access-l48gc\") pod \"barbican-operator-controller-manager-868647ff47-vjpw9\" (UID: \"d5c2df2a-fe2c-4a7f-ab0c-247fac6a47e9\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.574982 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qxkh\" (UniqueName: \"kubernetes.io/projected/cf23fb16-30b5-49d7-a204-2140b7afa8dc-kube-api-access-5qxkh\") pod \"cinder-operator-controller-manager-5d946d989d-xmfct\" (UID: \"cf23fb16-30b5-49d7-a204-2140b7afa8dc\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.575017 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9bw\" (UniqueName: \"kubernetes.io/projected/7db2d988-eae5-4cd7-9c68-b0fb971fc93b-kube-api-access-4f9bw\") pod \"designate-operator-controller-manager-6d8bf5c495-jbspb\" (UID: \"7db2d988-eae5-4cd7-9c68-b0fb971fc93b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.578080 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.590438 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.591449 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.596210 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-z7j69" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.596491 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.597578 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.600535 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-kq955" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.612830 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.632939 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.633879 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.649751 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-6hrjc" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.655455 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.656403 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.661582 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.671262 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pdrrq" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.671459 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.682908 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f9bw\" (UniqueName: \"kubernetes.io/projected/7db2d988-eae5-4cd7-9c68-b0fb971fc93b-kube-api-access-4f9bw\") pod \"designate-operator-controller-manager-6d8bf5c495-jbspb\" (UID: \"7db2d988-eae5-4cd7-9c68-b0fb971fc93b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.682989 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dn7l\" (UniqueName: \"kubernetes.io/projected/5fbc6ce1-751b-42d1-9f5c-1acc6bf0fdd2-kube-api-access-9dn7l\") pod \"glance-operator-controller-manager-77987464f4-9kk4z\" (UID: \"5fbc6ce1-751b-42d1-9f5c-1acc6bf0fdd2\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.683048 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l48gc\" (UniqueName: \"kubernetes.io/projected/d5c2df2a-fe2c-4a7f-ab0c-247fac6a47e9-kube-api-access-l48gc\") pod \"barbican-operator-controller-manager-868647ff47-vjpw9\" (UID: \"d5c2df2a-fe2c-4a7f-ab0c-247fac6a47e9\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.683104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsmcw\" (UniqueName: \"kubernetes.io/projected/1eea0362-7f54-47ba-9669-c561ebcfd69d-kube-api-access-dsmcw\") pod \"heat-operator-controller-manager-69f49c598c-8zbbf\" (UID: \"1eea0362-7f54-47ba-9669-c561ebcfd69d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.683175 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qxkh\" (UniqueName: \"kubernetes.io/projected/cf23fb16-30b5-49d7-a204-2140b7afa8dc-kube-api-access-5qxkh\") pod \"cinder-operator-controller-manager-5d946d989d-xmfct\" (UID: \"cf23fb16-30b5-49d7-a204-2140b7afa8dc\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.685867 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.698675 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.718509 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-l48gc\" (UniqueName: \"kubernetes.io/projected/d5c2df2a-fe2c-4a7f-ab0c-247fac6a47e9-kube-api-access-l48gc\") pod \"barbican-operator-controller-manager-868647ff47-vjpw9\" (UID: \"d5c2df2a-fe2c-4a7f-ab0c-247fac6a47e9\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.719538 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.720594 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.724630 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-762lp" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.724842 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qxkh\" (UniqueName: \"kubernetes.io/projected/cf23fb16-30b5-49d7-a204-2140b7afa8dc-kube-api-access-5qxkh\") pod \"cinder-operator-controller-manager-5d946d989d-xmfct\" (UID: \"cf23fb16-30b5-49d7-a204-2140b7afa8dc\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.727549 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f9bw\" (UniqueName: \"kubernetes.io/projected/7db2d988-eae5-4cd7-9c68-b0fb971fc93b-kube-api-access-4f9bw\") pod \"designate-operator-controller-manager-6d8bf5c495-jbspb\" (UID: \"7db2d988-eae5-4cd7-9c68-b0fb971fc93b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.727615 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.739382 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.740380 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.743341 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gphll" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.756175 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.773031 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.776575 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.784774 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.787188 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dn7l\" (UniqueName: \"kubernetes.io/projected/5fbc6ce1-751b-42d1-9f5c-1acc6bf0fdd2-kube-api-access-9dn7l\") pod \"glance-operator-controller-manager-77987464f4-9kk4z\" (UID: \"5fbc6ce1-751b-42d1-9f5c-1acc6bf0fdd2\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.787239 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdq5m\" (UniqueName: \"kubernetes.io/projected/797181b9-d095-42dc-9bf6-f87665ba40c5-kube-api-access-tdq5m\") pod \"ironic-operator-controller-manager-554564d7fc-q8nbq\" (UID: \"797181b9-d095-42dc-9bf6-f87665ba40c5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.787274 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qcbz\" (UniqueName: \"kubernetes.io/projected/97c634de-ffb7-4340-b622-782ee351de54-kube-api-access-5qcbz\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: \"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.787290 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: \"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.787333 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjm9m\" (UniqueName: \"kubernetes.io/projected/92f8fa10-b559-4065-bdc5-1bd1b6b89b22-kube-api-access-xjm9m\") pod \"horizon-operator-controller-manager-5b9b8895d5-7w28d\" (UID: \"92f8fa10-b559-4065-bdc5-1bd1b6b89b22\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.787357 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsmcw\" (UniqueName: \"kubernetes.io/projected/1eea0362-7f54-47ba-9669-c561ebcfd69d-kube-api-access-dsmcw\") pod \"heat-operator-controller-manager-69f49c598c-8zbbf\" (UID: \"1eea0362-7f54-47ba-9669-c561ebcfd69d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.793139 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.794028 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-hj4pb" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.795136 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.796195 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.809981 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-499q4" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.843619 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4"] Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.857884 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsmcw\" (UniqueName: \"kubernetes.io/projected/1eea0362-7f54-47ba-9669-c561ebcfd69d-kube-api-access-dsmcw\") pod \"heat-operator-controller-manager-69f49c598c-8zbbf\" (UID: \"1eea0362-7f54-47ba-9669-c561ebcfd69d\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.864455 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.874859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dn7l\" (UniqueName: \"kubernetes.io/projected/5fbc6ce1-751b-42d1-9f5c-1acc6bf0fdd2-kube-api-access-9dn7l\") pod \"glance-operator-controller-manager-77987464f4-9kk4z\" (UID: \"5fbc6ce1-751b-42d1-9f5c-1acc6bf0fdd2\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.888379 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qcbz\" (UniqueName: \"kubernetes.io/projected/97c634de-ffb7-4340-b622-782ee351de54-kube-api-access-5qcbz\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: \"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.888685 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: \"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.888738 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mclbm\" (UniqueName: \"kubernetes.io/projected/1fa270c7-9d09-444c-9ccd-70febd3fc194-kube-api-access-mclbm\") pod \"keystone-operator-controller-manager-b4d948c87-pmmsh\" (UID: \"1fa270c7-9d09-444c-9ccd-70febd3fc194\") " 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.888763 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjm9m\" (UniqueName: \"kubernetes.io/projected/92f8fa10-b559-4065-bdc5-1bd1b6b89b22-kube-api-access-xjm9m\") pod \"horizon-operator-controller-manager-5b9b8895d5-7w28d\" (UID: \"92f8fa10-b559-4065-bdc5-1bd1b6b89b22\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.888803 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjsv5\" (UniqueName: \"kubernetes.io/projected/13981a34-157a-433a-bb3b-5ec086dc6506-kube-api-access-fjsv5\") pod \"mariadb-operator-controller-manager-6994f66f48-n26v4\" (UID: \"13981a34-157a-433a-bb3b-5ec086dc6506\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.888828 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kms8g\" (UniqueName: \"kubernetes.io/projected/63f821ff-0cb4-4722-87df-511e1758288e-kube-api-access-kms8g\") pod \"manila-operator-controller-manager-54f6768c69-sbwq4\" (UID: \"63f821ff-0cb4-4722-87df-511e1758288e\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.888964 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdq5m\" (UniqueName: \"kubernetes.io/projected/797181b9-d095-42dc-9bf6-f87665ba40c5-kube-api-access-tdq5m\") pod \"ironic-operator-controller-manager-554564d7fc-q8nbq\" (UID: \"797181b9-d095-42dc-9bf6-f87665ba40c5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq" Feb 17 00:40:23 crc kubenswrapper[4805]: E0217 00:40:23.889794 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 00:40:23 crc kubenswrapper[4805]: E0217 00:40:23.889843 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert podName:97c634de-ffb7-4340-b622-782ee351de54 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:24.38981959 +0000 UTC m=+1050.405628988 (durationBeforeRetry 500ms). 
Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.912970 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4"]
Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.916785 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjm9m\" (UniqueName: \"kubernetes.io/projected/92f8fa10-b559-4065-bdc5-1bd1b6b89b22-kube-api-access-xjm9m\") pod \"horizon-operator-controller-manager-5b9b8895d5-7w28d\" (UID: \"92f8fa10-b559-4065-bdc5-1bd1b6b89b22\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d"
Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.919108 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z"
Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.937615 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf"
Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.938206 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qcbz\" (UniqueName: \"kubernetes.io/projected/97c634de-ffb7-4340-b622-782ee351de54-kube-api-access-5qcbz\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: \"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd"
Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.938275 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp"]
Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.939155 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp"
Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.945065 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdq5m\" (UniqueName: \"kubernetes.io/projected/797181b9-d095-42dc-9bf6-f87665ba40c5-kube-api-access-tdq5m\") pod \"ironic-operator-controller-manager-554564d7fc-q8nbq\" (UID: \"797181b9-d095-42dc-9bf6-f87665ba40c5\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq"
Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.954894 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rx8lw"
Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.955300 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq"
Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.963894 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.992481 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mclbm\" (UniqueName: \"kubernetes.io/projected/1fa270c7-9d09-444c-9ccd-70febd3fc194-kube-api-access-mclbm\") pod \"keystone-operator-controller-manager-b4d948c87-pmmsh\" (UID: \"1fa270c7-9d09-444c-9ccd-70febd3fc194\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.992523 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjsv5\" (UniqueName: \"kubernetes.io/projected/13981a34-157a-433a-bb3b-5ec086dc6506-kube-api-access-fjsv5\") pod \"mariadb-operator-controller-manager-6994f66f48-n26v4\" (UID: \"13981a34-157a-433a-bb3b-5ec086dc6506\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4" Feb 17 00:40:23 crc kubenswrapper[4805]: I0217 00:40:23.992554 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kms8g\" (UniqueName: \"kubernetes.io/projected/63f821ff-0cb4-4722-87df-511e1758288e-kube-api-access-kms8g\") pod \"manila-operator-controller-manager-54f6768c69-sbwq4\" (UID: \"63f821ff-0cb4-4722-87df-511e1758288e\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.013708 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-klb75"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.014703 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.020826 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-v6r47" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.024976 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.036151 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kms8g\" (UniqueName: \"kubernetes.io/projected/63f821ff-0cb4-4722-87df-511e1758288e-kube-api-access-kms8g\") pod \"manila-operator-controller-manager-54f6768c69-sbwq4\" (UID: \"63f821ff-0cb4-4722-87df-511e1758288e\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.036883 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mclbm\" (UniqueName: \"kubernetes.io/projected/1fa270c7-9d09-444c-9ccd-70febd3fc194-kube-api-access-mclbm\") pod \"keystone-operator-controller-manager-b4d948c87-pmmsh\" (UID: \"1fa270c7-9d09-444c-9ccd-70febd3fc194\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.040675 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjsv5\" (UniqueName: \"kubernetes.io/projected/13981a34-157a-433a-bb3b-5ec086dc6506-kube-api-access-fjsv5\") pod \"mariadb-operator-controller-manager-6994f66f48-n26v4\" (UID: \"13981a34-157a-433a-bb3b-5ec086dc6506\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.051907 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.052940 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.060655 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-cvl6k" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.061137 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-klb75"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.061477 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.064139 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.080416 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.081375 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.085256 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4xgsf" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.085447 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.106340 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sxln\" (UniqueName: \"kubernetes.io/projected/da0ffea9-23b4-41d5-b3db-8d76372c949d-kube-api-access-8sxln\") pod \"neutron-operator-controller-manager-64ddbf8bb-kdndp\" (UID: \"da0ffea9-23b4-41d5-b3db-8d76372c949d\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.106558 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xfbk\" (UniqueName: \"kubernetes.io/projected/69a2b32d-8ef4-4bcf-a048-d169e9577f38-kube-api-access-5xfbk\") pod \"nova-operator-controller-manager-567668f5cf-klb75\" (UID: \"69a2b32d-8ef4-4bcf-a048-d169e9577f38\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.181541 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.187152 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.190889 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-q6hh5" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.210486 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjvj9\" (UniqueName: \"kubernetes.io/projected/bee5466c-cf0f-4af9-8c9f-f323e814d02d-kube-api-access-zjvj9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.210551 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.210588 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sxln\" (UniqueName: \"kubernetes.io/projected/da0ffea9-23b4-41d5-b3db-8d76372c949d-kube-api-access-8sxln\") pod \"neutron-operator-controller-manager-64ddbf8bb-kdndp\" (UID: \"da0ffea9-23b4-41d5-b3db-8d76372c949d\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.210674 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qr7l\" (UniqueName: \"kubernetes.io/projected/fa1c6038-a220-4d79-8d11-97d0dbbb4b38-kube-api-access-9qr7l\") pod \"octavia-operator-controller-manager-69f8888797-q9dgv\" (UID: \"fa1c6038-a220-4d79-8d11-97d0dbbb4b38\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.210705 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xfbk\" (UniqueName: \"kubernetes.io/projected/69a2b32d-8ef4-4bcf-a048-d169e9577f38-kube-api-access-5xfbk\") pod \"nova-operator-controller-manager-567668f5cf-klb75\" (UID: \"69a2b32d-8ef4-4bcf-a048-d169e9577f38\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.226293 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.239044 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xfbk\" (UniqueName: \"kubernetes.io/projected/69a2b32d-8ef4-4bcf-a048-d169e9577f38-kube-api-access-5xfbk\") pod \"nova-operator-controller-manager-567668f5cf-klb75\" (UID: \"69a2b32d-8ef4-4bcf-a048-d169e9577f38\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.248748 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sxln\" (UniqueName: 
\"kubernetes.io/projected/da0ffea9-23b4-41d5-b3db-8d76372c949d-kube-api-access-8sxln\") pod \"neutron-operator-controller-manager-64ddbf8bb-kdndp\" (UID: \"da0ffea9-23b4-41d5-b3db-8d76372c949d\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.250057 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.254269 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.265885 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-xcc97" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.275552 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.281936 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.307773 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.312082 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c76j\" (UniqueName: \"kubernetes.io/projected/46c67b9e-b2a0-4de9-9ecd-581c646896fe-kube-api-access-9c76j\") pod \"placement-operator-controller-manager-8497b45c89-rwt67\" (UID: \"46c67b9e-b2a0-4de9-9ecd-581c646896fe\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.312158 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qr7l\" (UniqueName: \"kubernetes.io/projected/fa1c6038-a220-4d79-8d11-97d0dbbb4b38-kube-api-access-9qr7l\") pod \"octavia-operator-controller-manager-69f8888797-q9dgv\" (UID: \"fa1c6038-a220-4d79-8d11-97d0dbbb4b38\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.312205 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjvj9\" (UniqueName: \"kubernetes.io/projected/bee5466c-cf0f-4af9-8c9f-f323e814d02d-kube-api-access-zjvj9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.312249 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:24 crc kubenswrapper[4805]: E0217 00:40:24.312373 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:24 crc kubenswrapper[4805]: E0217 00:40:24.312436 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert podName:bee5466c-cf0f-4af9-8c9f-f323e814d02d nodeName:}" failed. No retries permitted until 2026-02-17 00:40:24.812415752 +0000 UTC m=+1050.828225160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" (UID: "bee5466c-cf0f-4af9-8c9f-f323e814d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.327116 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-prh9h"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.328521 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.331883 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.333125 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rpmz7" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.333726 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qr7l\" (UniqueName: \"kubernetes.io/projected/fa1c6038-a220-4d79-8d11-97d0dbbb4b38-kube-api-access-9qr7l\") pod \"octavia-operator-controller-manager-69f8888797-q9dgv\" (UID: \"fa1c6038-a220-4d79-8d11-97d0dbbb4b38\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.336622 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjvj9\" (UniqueName: \"kubernetes.io/projected/bee5466c-cf0f-4af9-8c9f-f323e814d02d-kube-api-access-zjvj9\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.339144 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-prh9h"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.344984 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.346137 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.347598 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-m58cq" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.352756 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.364490 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-q8f27"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.383037 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-q8f27"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.383139 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-q8f27" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.392695 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.393434 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-d4j5t" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.413270 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: \"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.413792 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pjv6\" (UniqueName: \"kubernetes.io/projected/8d4c5113-e984-4b0c-b1c2-45b31750d654-kube-api-access-7pjv6\") pod \"ovn-operator-controller-manager-d44cf6b75-h4vnc\" (UID: \"8d4c5113-e984-4b0c-b1c2-45b31750d654\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.413938 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c76j\" (UniqueName: \"kubernetes.io/projected/46c67b9e-b2a0-4de9-9ecd-581c646896fe-kube-api-access-9c76j\") pod \"placement-operator-controller-manager-8497b45c89-rwt67\" (UID: \"46c67b9e-b2a0-4de9-9ecd-581c646896fe\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.414436 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpp7q\" (UniqueName: \"kubernetes.io/projected/2f94b9ee-0d59-4dfc-8a01-c506d368327f-kube-api-access-kpp7q\") pod \"swift-operator-controller-manager-68f46476f-prh9h\" (UID: \"2f94b9ee-0d59-4dfc-8a01-c506d368327f\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.414562 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p426c\" 
(UniqueName: \"kubernetes.io/projected/b83736b0-6ae8-4fc4-ab02-f731ce083723-kube-api-access-p426c\") pod \"telemetry-operator-controller-manager-6bf489ffd7-pw66z\" (UID: \"b83736b0-6ae8-4fc4-ab02-f731ce083723\") " pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" Feb 17 00:40:24 crc kubenswrapper[4805]: E0217 00:40:24.413954 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 00:40:24 crc kubenswrapper[4805]: E0217 00:40:24.414818 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert podName:97c634de-ffb7-4340-b622-782ee351de54 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:25.414798124 +0000 UTC m=+1051.430607522 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert") pod "infra-operator-controller-manager-79d975b745-lw4pd" (UID: "97c634de-ffb7-4340-b622-782ee351de54") : secret "infra-operator-webhook-server-cert" not found Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.415334 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.423137 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.426535 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.430018 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-55rvj" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.430685 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.444799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c76j\" (UniqueName: \"kubernetes.io/projected/46c67b9e-b2a0-4de9-9ecd-581c646896fe-kube-api-access-9c76j\") pod \"placement-operator-controller-manager-8497b45c89-rwt67\" (UID: \"46c67b9e-b2a0-4de9-9ecd-581c646896fe\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.463767 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.464838 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.469898 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.469971 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.470084 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tsxhd" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.472929 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.475355 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.480853 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.481978 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.487802 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-pvbtt" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.495905 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.515640 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pjv6\" (UniqueName: \"kubernetes.io/projected/8d4c5113-e984-4b0c-b1c2-45b31750d654-kube-api-access-7pjv6\") pod \"ovn-operator-controller-manager-d44cf6b75-h4vnc\" (UID: \"8d4c5113-e984-4b0c-b1c2-45b31750d654\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.515718 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgfsv\" (UniqueName: \"kubernetes.io/projected/e878e0f7-5fd0-4ab2-8503-ce2b71c26dbe-kube-api-access-tgfsv\") pod \"test-operator-controller-manager-7866795846-q8f27\" (UID: \"e878e0f7-5fd0-4ab2-8503-ce2b71c26dbe\") " pod="openstack-operators/test-operator-controller-manager-7866795846-q8f27" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.515776 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpp7q\" (UniqueName: \"kubernetes.io/projected/2f94b9ee-0d59-4dfc-8a01-c506d368327f-kube-api-access-kpp7q\") pod \"swift-operator-controller-manager-68f46476f-prh9h\" (UID: \"2f94b9ee-0d59-4dfc-8a01-c506d368327f\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.515817 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p426c\" (UniqueName: \"kubernetes.io/projected/b83736b0-6ae8-4fc4-ab02-f731ce083723-kube-api-access-p426c\") pod 
\"telemetry-operator-controller-manager-6bf489ffd7-pw66z\" (UID: \"b83736b0-6ae8-4fc4-ab02-f731ce083723\") " pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.515856 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzldr\" (UniqueName: \"kubernetes.io/projected/807e772b-99f0-4578-b462-14b359040c87-kube-api-access-dzldr\") pod \"watcher-operator-controller-manager-5db88f68c-6phwc\" (UID: \"807e772b-99f0-4578-b462-14b359040c87\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.534730 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p426c\" (UniqueName: \"kubernetes.io/projected/b83736b0-6ae8-4fc4-ab02-f731ce083723-kube-api-access-p426c\") pod \"telemetry-operator-controller-manager-6bf489ffd7-pw66z\" (UID: \"b83736b0-6ae8-4fc4-ab02-f731ce083723\") " pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.540982 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpp7q\" (UniqueName: \"kubernetes.io/projected/2f94b9ee-0d59-4dfc-8a01-c506d368327f-kube-api-access-kpp7q\") pod \"swift-operator-controller-manager-68f46476f-prh9h\" (UID: \"2f94b9ee-0d59-4dfc-8a01-c506d368327f\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.545006 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pjv6\" (UniqueName: \"kubernetes.io/projected/8d4c5113-e984-4b0c-b1c2-45b31750d654-kube-api-access-7pjv6\") pod \"ovn-operator-controller-manager-d44cf6b75-h4vnc\" (UID: \"8d4c5113-e984-4b0c-b1c2-45b31750d654\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.564495 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.564836 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.590774 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.617649 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khkj9\" (UniqueName: \"kubernetes.io/projected/ad57ab8f-521c-44a5-b5d5-22264e6a79b0-kube-api-access-khkj9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wkvdc\" (UID: \"ad57ab8f-521c-44a5-b5d5-22264e6a79b0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.617690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.617738 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzldr\" (UniqueName: \"kubernetes.io/projected/807e772b-99f0-4578-b462-14b359040c87-kube-api-access-dzldr\") pod \"watcher-operator-controller-manager-5db88f68c-6phwc\" (UID: \"807e772b-99f0-4578-b462-14b359040c87\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.617762 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22glp\" (UniqueName: \"kubernetes.io/projected/ed86b6a0-d091-482b-8bdb-0d0ae3153733-kube-api-access-22glp\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.618036 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgfsv\" (UniqueName: \"kubernetes.io/projected/e878e0f7-5fd0-4ab2-8503-ce2b71c26dbe-kube-api-access-tgfsv\") pod \"test-operator-controller-manager-7866795846-q8f27\" (UID: \"e878e0f7-5fd0-4ab2-8503-ce2b71c26dbe\") " pod="openstack-operators/test-operator-controller-manager-7866795846-q8f27" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.618081 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.631947 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9" event={"ID":"d5c2df2a-fe2c-4a7f-ab0c-247fac6a47e9","Type":"ContainerStarted","Data":"d224d33b111c279e3cdecaa2550e9f45fe46f3824c8e7f98ba53e85a1cdbd885"} Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.634181 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgfsv\" (UniqueName: \"kubernetes.io/projected/e878e0f7-5fd0-4ab2-8503-ce2b71c26dbe-kube-api-access-tgfsv\") pod 
\"test-operator-controller-manager-7866795846-q8f27\" (UID: \"e878e0f7-5fd0-4ab2-8503-ce2b71c26dbe\") " pod="openstack-operators/test-operator-controller-manager-7866795846-q8f27" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.635832 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzldr\" (UniqueName: \"kubernetes.io/projected/807e772b-99f0-4578-b462-14b359040c87-kube-api-access-dzldr\") pod \"watcher-operator-controller-manager-5db88f68c-6phwc\" (UID: \"807e772b-99f0-4578-b462-14b359040c87\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.675981 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.692964 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.712130 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-q8f27" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.722557 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct"] Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.724318 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.724415 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khkj9\" (UniqueName: \"kubernetes.io/projected/ad57ab8f-521c-44a5-b5d5-22264e6a79b0-kube-api-access-khkj9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wkvdc\" (UID: \"ad57ab8f-521c-44a5-b5d5-22264e6a79b0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.724435 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:24 crc kubenswrapper[4805]: E0217 00:40:24.724467 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 00:40:24 crc kubenswrapper[4805]: E0217 00:40:24.724538 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs podName:ed86b6a0-d091-482b-8bdb-0d0ae3153733 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:25.224519982 +0000 UTC m=+1051.240329380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs") pod "openstack-operator-controller-manager-5fddb9857-6r6nf" (UID: "ed86b6a0-d091-482b-8bdb-0d0ae3153733") : secret "webhook-server-cert" not found Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.724483 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22glp\" (UniqueName: \"kubernetes.io/projected/ed86b6a0-d091-482b-8bdb-0d0ae3153733-kube-api-access-22glp\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:24 crc kubenswrapper[4805]: E0217 00:40:24.724930 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 00:40:24 crc kubenswrapper[4805]: E0217 00:40:24.725043 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs podName:ed86b6a0-d091-482b-8bdb-0d0ae3153733 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:25.225026176 +0000 UTC m=+1051.240835664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs") pod "openstack-operator-controller-manager-5fddb9857-6r6nf" (UID: "ed86b6a0-d091-482b-8bdb-0d0ae3153733") : secret "metrics-server-cert" not found Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.740909 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22glp\" (UniqueName: \"kubernetes.io/projected/ed86b6a0-d091-482b-8bdb-0d0ae3153733-kube-api-access-22glp\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.743721 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khkj9\" (UniqueName: \"kubernetes.io/projected/ad57ab8f-521c-44a5-b5d5-22264e6a79b0-kube-api-access-khkj9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wkvdc\" (UID: \"ad57ab8f-521c-44a5-b5d5-22264e6a79b0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.775152 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.827433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:24 crc kubenswrapper[4805]: E0217 00:40:24.827680 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:24 crc kubenswrapper[4805]: E0217 00:40:24.827768 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert podName:bee5466c-cf0f-4af9-8c9f-f323e814d02d nodeName:}" failed. No retries permitted until 2026-02-17 00:40:25.827721207 +0000 UTC m=+1051.843530605 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" (UID: "bee5466c-cf0f-4af9-8c9f-f323e814d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:24 crc kubenswrapper[4805]: I0217 00:40:24.830734 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc" Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.035787 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.069728 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.082810 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.088085 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.096780 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.109124 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.236689 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.237018 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.237214 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.237287 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.237387 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs podName:ed86b6a0-d091-482b-8bdb-0d0ae3153733 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:26.237362949 +0000 UTC m=+1052.253172377 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs") pod "openstack-operator-controller-manager-5fddb9857-6r6nf" (UID: "ed86b6a0-d091-482b-8bdb-0d0ae3153733") : secret "webhook-server-cert" not found Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.237578 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs podName:ed86b6a0-d091-482b-8bdb-0d0ae3153733 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:26.237514823 +0000 UTC m=+1052.253324231 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs") pod "openstack-operator-controller-manager-5fddb9857-6r6nf" (UID: "ed86b6a0-d091-482b-8bdb-0d0ae3153733") : secret "metrics-server-cert" not found Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.305806 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.320274 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.339048 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.349766 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv"] Feb 17 00:40:25 crc kubenswrapper[4805]: W0217 00:40:25.353123 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa1c6038_a220_4d79_8d11_97d0dbbb4b38.slice/crio-5c69a6d324b0234cc05a47f6b35bbeaaf4a3bcffbaaf89bb8d3d877fb182710c WatchSource:0}: Error finding container 5c69a6d324b0234cc05a47f6b35bbeaaf4a3bcffbaaf89bb8d3d877fb182710c: Status 404 returned error can't find the container with id 5c69a6d324b0234cc05a47f6b35bbeaaf4a3bcffbaaf89bb8d3d877fb182710c Feb 17 00:40:25 crc kubenswrapper[4805]: W0217 00:40:25.355628 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46c67b9e_b2a0_4de9_9ecd_581c646896fe.slice/crio-51c05968aa0b4e8c300336f1302a50c2e7027fa0fc060a0830464451cd386c32 WatchSource:0}: Error finding container 51c05968aa0b4e8c300336f1302a50c2e7027fa0fc060a0830464451cd386c32: Status 404 returned error can't find the container with id 51c05968aa0b4e8c300336f1302a50c2e7027fa0fc060a0830464451cd386c32 Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.439755 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: \"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.439984 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.440034 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert podName:97c634de-ffb7-4340-b622-782ee351de54 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:27.440020325 +0000 UTC m=+1053.455829723 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert") pod "infra-operator-controller-manager-79d975b745-lw4pd" (UID: "97c634de-ffb7-4340-b622-782ee351de54") : secret "infra-operator-webhook-server-cert" not found Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.526435 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp"] Feb 17 00:40:25 crc kubenswrapper[4805]: W0217 00:40:25.535188 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d4c5113_e984_4b0c_b1c2_45b31750d654.slice/crio-26b1cb4003df01aa6eb89428c7f303836d27b28eeed21cce0da46aa4dcbe3c7c WatchSource:0}: Error finding container 26b1cb4003df01aa6eb89428c7f303836d27b28eeed21cce0da46aa4dcbe3c7c: Status 404 returned error can't find the container with id 26b1cb4003df01aa6eb89428c7f303836d27b28eeed21cce0da46aa4dcbe3c7c Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.536642 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-q8f27"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.547490 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.554249 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.560574 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.566674 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-klb75"] Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.571733 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-68f46476f-prh9h"] Feb 17 00:40:25 crc kubenswrapper[4805]: W0217 00:40:25.574487 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a2b32d_8ef4_4bcf_a048_d169e9577f38.slice/crio-8022fd80564197711fa1cec0d90efccb3c71d6040fad284350f6599a11d46b8d WatchSource:0}: Error finding container 8022fd80564197711fa1cec0d90efccb3c71d6040fad284350f6599a11d46b8d: Status 404 returned error can't find the container with id 8022fd80564197711fa1cec0d90efccb3c71d6040fad284350f6599a11d46b8d Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.576231 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc"] Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.589629 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5xfbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-klb75_openstack-operators(69a2b32d-8ef4-4bcf-a048-d169e9577f38): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.590877 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull 
QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" podUID="69a2b32d-8ef4-4bcf-a048-d169e9577f38" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.592808 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.196:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p426c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6bf489ffd7-pw66z_openstack-operators(b83736b0-6ae8-4fc4-ab02-f731ce083723): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.593927 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" podUID="b83736b0-6ae8-4fc4-ab02-f731ce083723" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.594339 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kpp7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-prh9h_openstack-operators(2f94b9ee-0d59-4dfc-8a01-c506d368327f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.596034 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" podUID="2f94b9ee-0d59-4dfc-8a01-c506d368327f" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.600824 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 
500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-khkj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wkvdc_openstack-operators(ad57ab8f-521c-44a5-b5d5-22264e6a79b0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.601980 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc" podUID="ad57ab8f-521c-44a5-b5d5-22264e6a79b0" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.603395 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dzldr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-6phwc_openstack-operators(807e772b-99f0-4578-b462-14b359040c87): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.604563 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" podUID="807e772b-99f0-4578-b462-14b359040c87" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.608450 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8sxln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-kdndp_openstack-operators(da0ffea9-23b4-41d5-b3db-8d76372c949d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.610159 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" podUID="da0ffea9-23b4-41d5-b3db-8d76372c949d" Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.643058 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh" event={"ID":"1fa270c7-9d09-444c-9ccd-70febd3fc194","Type":"ContainerStarted","Data":"64799231a3bef858949d844815471ad7f572b4d2daa8eb134953de929fc352e5"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.644836 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4" event={"ID":"13981a34-157a-433a-bb3b-5ec086dc6506","Type":"ContainerStarted","Data":"a2b7fc985283a3a67429160c197c509e8f6cb4e105d02f3ffd906b7f3a128bcb"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.645745 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" event={"ID":"2f94b9ee-0d59-4dfc-8a01-c506d368327f","Type":"ContainerStarted","Data":"1fcd7aa9d81e12e1365e787176a840a5da6fa7dbd3cdbedac615a085bd4359ed"} Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.646982 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" podUID="2f94b9ee-0d59-4dfc-8a01-c506d368327f" Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.648139 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" event={"ID":"69a2b32d-8ef4-4bcf-a048-d169e9577f38","Type":"ContainerStarted","Data":"8022fd80564197711fa1cec0d90efccb3c71d6040fad284350f6599a11d46b8d"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.651247 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z" event={"ID":"5fbc6ce1-751b-42d1-9f5c-1acc6bf0fdd2","Type":"ContainerStarted","Data":"c421aa744f4f7554037153f68ddfd3ea6dcad5dfe6cee2a0815ab0eba73102cd"} Feb 17 00:40:25 crc 
kubenswrapper[4805]: E0217 00:40:25.652809 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" podUID="69a2b32d-8ef4-4bcf-a048-d169e9577f38" Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.652980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq" event={"ID":"797181b9-d095-42dc-9bf6-f87665ba40c5","Type":"ContainerStarted","Data":"7542a5dd48441caa8838055921b38e81a58ebd7b763c31a2a336d11590ef5f99"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.656089 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc" event={"ID":"8d4c5113-e984-4b0c-b1c2-45b31750d654","Type":"ContainerStarted","Data":"26b1cb4003df01aa6eb89428c7f303836d27b28eeed21cce0da46aa4dcbe3c7c"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.666022 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-q8f27" event={"ID":"e878e0f7-5fd0-4ab2-8503-ce2b71c26dbe","Type":"ContainerStarted","Data":"3a7c3e9db1fdf7192fbb7b64929d5b71df003ebe603a5abd85ee9a7c8b4596bf"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.669076 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" event={"ID":"b83736b0-6ae8-4fc4-ab02-f731ce083723","Type":"ContainerStarted","Data":"1abcd6034db9cdf5f7816295a463b48ff573490b68ad8683e6ffe3f4a280c268"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.670515 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc" event={"ID":"ad57ab8f-521c-44a5-b5d5-22264e6a79b0","Type":"ContainerStarted","Data":"1d58e59a86bcfdd38a6f911c53bae814c8d4330b06dbb41e55fd6ba8de179013"} Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.671244 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" podUID="b83736b0-6ae8-4fc4-ab02-f731ce083723" Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.671421 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d" event={"ID":"92f8fa10-b559-4065-bdc5-1bd1b6b89b22","Type":"ContainerStarted","Data":"c16572b1fd3467dbe67bd11cfaf8b0c66b1d8aa5ed7069df82ef2b7b0bed938f"} Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.671501 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc" podUID="ad57ab8f-521c-44a5-b5d5-22264e6a79b0" Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.672919 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct" event={"ID":"cf23fb16-30b5-49d7-a204-2140b7afa8dc","Type":"ContainerStarted","Data":"4678efe166bc32718c00fca0795711cdf81579d311946e1ff4dfa0e7a5b83e9e"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.684595 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" event={"ID":"807e772b-99f0-4578-b462-14b359040c87","Type":"ContainerStarted","Data":"3079e124cff36bf134c3dfc24d304e383746ad58497778251efb42953c5292da"} Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.686242 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" podUID="807e772b-99f0-4578-b462-14b359040c87" Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.687518 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb" event={"ID":"7db2d988-eae5-4cd7-9c68-b0fb971fc93b","Type":"ContainerStarted","Data":"1bb8ef305f422f9714a6b1cfda446ec39f3da4685244461c9dce1fb63d424a90"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.690555 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" event={"ID":"da0ffea9-23b4-41d5-b3db-8d76372c949d","Type":"ContainerStarted","Data":"6d59771257fadb7ea7ae1f9f9d4f3fd989f865052c8e41fd24b68820b7dca7b6"} Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.693411 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" podUID="da0ffea9-23b4-41d5-b3db-8d76372c949d" Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.698297 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4" event={"ID":"63f821ff-0cb4-4722-87df-511e1758288e","Type":"ContainerStarted","Data":"8af2b18ea7f3d082d4dba73ebe3b7dfe987f0f2ef5a63ac3476134bff280ee5e"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.704144 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67" event={"ID":"46c67b9e-b2a0-4de9-9ecd-581c646896fe","Type":"ContainerStarted","Data":"51c05968aa0b4e8c300336f1302a50c2e7027fa0fc060a0830464451cd386c32"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.706657 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv" event={"ID":"fa1c6038-a220-4d79-8d11-97d0dbbb4b38","Type":"ContainerStarted","Data":"5c69a6d324b0234cc05a47f6b35bbeaaf4a3bcffbaaf89bb8d3d877fb182710c"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.708333 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf" 
event={"ID":"1eea0362-7f54-47ba-9669-c561ebcfd69d","Type":"ContainerStarted","Data":"2bc0b39a34702a3926914fbcfe400dab552fad23393963b38d62e72bac14b3ca"} Feb 17 00:40:25 crc kubenswrapper[4805]: I0217 00:40:25.847886 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.849690 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:25 crc kubenswrapper[4805]: E0217 00:40:25.849824 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert podName:bee5466c-cf0f-4af9-8c9f-f323e814d02d nodeName:}" failed. No retries permitted until 2026-02-17 00:40:27.849793411 +0000 UTC m=+1053.865602809 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" (UID: "bee5466c-cf0f-4af9-8c9f-f323e814d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:26 crc kubenswrapper[4805]: I0217 00:40:26.255818 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:26 crc kubenswrapper[4805]: I0217 00:40:26.255908 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:26 crc kubenswrapper[4805]: E0217 00:40:26.255999 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 00:40:26 crc kubenswrapper[4805]: E0217 00:40:26.256073 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs podName:ed86b6a0-d091-482b-8bdb-0d0ae3153733 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:28.256054779 +0000 UTC m=+1054.271864177 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs") pod "openstack-operator-controller-manager-5fddb9857-6r6nf" (UID: "ed86b6a0-d091-482b-8bdb-0d0ae3153733") : secret "webhook-server-cert" not found Feb 17 00:40:26 crc kubenswrapper[4805]: E0217 00:40:26.256112 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 00:40:26 crc kubenswrapper[4805]: E0217 00:40:26.256195 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs podName:ed86b6a0-d091-482b-8bdb-0d0ae3153733 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:28.256177312 +0000 UTC m=+1054.271986710 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs") pod "openstack-operator-controller-manager-5fddb9857-6r6nf" (UID: "ed86b6a0-d091-482b-8bdb-0d0ae3153733") : secret "metrics-server-cert" not found Feb 17 00:40:26 crc kubenswrapper[4805]: E0217 00:40:26.716628 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" podUID="69a2b32d-8ef4-4bcf-a048-d169e9577f38" Feb 17 00:40:26 crc kubenswrapper[4805]: E0217 00:40:26.717100 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" podUID="2f94b9ee-0d59-4dfc-8a01-c506d368327f" Feb 17 00:40:26 crc kubenswrapper[4805]: E0217 00:40:26.717145 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc" podUID="ad57ab8f-521c-44a5-b5d5-22264e6a79b0" Feb 17 00:40:26 crc kubenswrapper[4805]: E0217 00:40:26.717177 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.196:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" podUID="b83736b0-6ae8-4fc4-ab02-f731ce083723" Feb 17 00:40:26 crc kubenswrapper[4805]: E0217 00:40:26.717213 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" podUID="da0ffea9-23b4-41d5-b3db-8d76372c949d" Feb 17 00:40:26 crc kubenswrapper[4805]: E0217 
00:40:26.717242 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" podUID="807e772b-99f0-4578-b462-14b359040c87" Feb 17 00:40:27 crc kubenswrapper[4805]: I0217 00:40:27.483125 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: \"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:27 crc kubenswrapper[4805]: E0217 00:40:27.483424 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 00:40:27 crc kubenswrapper[4805]: E0217 00:40:27.483476 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert podName:97c634de-ffb7-4340-b622-782ee351de54 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:31.483459772 +0000 UTC m=+1057.499269170 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert") pod "infra-operator-controller-manager-79d975b745-lw4pd" (UID: "97c634de-ffb7-4340-b622-782ee351de54") : secret "infra-operator-webhook-server-cert" not found Feb 17 00:40:27 crc kubenswrapper[4805]: I0217 00:40:27.890101 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:27 crc kubenswrapper[4805]: E0217 00:40:27.890273 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:27 crc kubenswrapper[4805]: E0217 00:40:27.892960 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert podName:bee5466c-cf0f-4af9-8c9f-f323e814d02d nodeName:}" failed. No retries permitted until 2026-02-17 00:40:31.892920169 +0000 UTC m=+1057.908729577 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" (UID: "bee5466c-cf0f-4af9-8c9f-f323e814d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:28 crc kubenswrapper[4805]: I0217 00:40:28.296262 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:28 crc kubenswrapper[4805]: I0217 00:40:28.296478 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:28 crc kubenswrapper[4805]: E0217 00:40:28.296493 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 00:40:28 crc kubenswrapper[4805]: E0217 00:40:28.296578 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs podName:ed86b6a0-d091-482b-8bdb-0d0ae3153733 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:32.296556954 +0000 UTC m=+1058.312366462 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs") pod "openstack-operator-controller-manager-5fddb9857-6r6nf" (UID: "ed86b6a0-d091-482b-8bdb-0d0ae3153733") : secret "metrics-server-cert" not found Feb 17 00:40:28 crc kubenswrapper[4805]: E0217 00:40:28.296609 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 00:40:28 crc kubenswrapper[4805]: E0217 00:40:28.296682 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs podName:ed86b6a0-d091-482b-8bdb-0d0ae3153733 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:32.296664347 +0000 UTC m=+1058.312473745 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs") pod "openstack-operator-controller-manager-5fddb9857-6r6nf" (UID: "ed86b6a0-d091-482b-8bdb-0d0ae3153733") : secret "webhook-server-cert" not found Feb 17 00:40:31 crc kubenswrapper[4805]: I0217 00:40:31.562158 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: \"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:31 crc kubenswrapper[4805]: E0217 00:40:31.562357 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 00:40:31 crc kubenswrapper[4805]: E0217 00:40:31.563055 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert podName:97c634de-ffb7-4340-b622-782ee351de54 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:39.563031104 +0000 UTC m=+1065.578840592 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert") pod "infra-operator-controller-manager-79d975b745-lw4pd" (UID: "97c634de-ffb7-4340-b622-782ee351de54") : secret "infra-operator-webhook-server-cert" not found Feb 17 00:40:31 crc kubenswrapper[4805]: I0217 00:40:31.970237 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:31 crc kubenswrapper[4805]: E0217 00:40:31.970578 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:31 crc kubenswrapper[4805]: E0217 00:40:31.970633 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert podName:bee5466c-cf0f-4af9-8c9f-f323e814d02d nodeName:}" failed. No retries permitted until 2026-02-17 00:40:39.970617129 +0000 UTC m=+1065.986426537 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" (UID: "bee5466c-cf0f-4af9-8c9f-f323e814d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:32 crc kubenswrapper[4805]: I0217 00:40:32.379475 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:32 crc kubenswrapper[4805]: I0217 00:40:32.379586 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:32 crc kubenswrapper[4805]: E0217 00:40:32.379624 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 00:40:32 crc kubenswrapper[4805]: E0217 00:40:32.379681 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs podName:ed86b6a0-d091-482b-8bdb-0d0ae3153733 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:40.379667704 +0000 UTC m=+1066.395477102 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs") pod "openstack-operator-controller-manager-5fddb9857-6r6nf" (UID: "ed86b6a0-d091-482b-8bdb-0d0ae3153733") : secret "webhook-server-cert" not found Feb 17 00:40:32 crc kubenswrapper[4805]: E0217 00:40:32.379775 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 00:40:32 crc kubenswrapper[4805]: E0217 00:40:32.379903 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs podName:ed86b6a0-d091-482b-8bdb-0d0ae3153733 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:40.37987232 +0000 UTC m=+1066.395681808 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs") pod "openstack-operator-controller-manager-5fddb9857-6r6nf" (UID: "ed86b6a0-d091-482b-8bdb-0d0ae3153733") : secret "metrics-server-cert" not found Feb 17 00:40:37 crc kubenswrapper[4805]: I0217 00:40:37.793973 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d" event={"ID":"92f8fa10-b559-4065-bdc5-1bd1b6b89b22","Type":"ContainerStarted","Data":"406cbac85123fcb9fe839c62b6cf2b253c03f3eefe057de7231b2d9fcdd2740b"} Feb 17 00:40:37 crc kubenswrapper[4805]: I0217 00:40:37.795287 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d" Feb 17 00:40:37 crc kubenswrapper[4805]: I0217 00:40:37.797539 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb" event={"ID":"7db2d988-eae5-4cd7-9c68-b0fb971fc93b","Type":"ContainerStarted","Data":"476e177f15dbc404578d6b56a9183d0cb31970efa0ab71de41a0bf464c6c319e"} Feb 17 00:40:37 crc kubenswrapper[4805]: I0217 00:40:37.797847 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb" Feb 17 00:40:37 crc kubenswrapper[4805]: I0217 00:40:37.816649 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d" podStartSLOduration=2.603248167 podStartE2EDuration="14.816626379s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.080511705 +0000 UTC m=+1051.096321103" lastFinishedPulling="2026-02-17 00:40:37.293889917 +0000 UTC m=+1063.309699315" observedRunningTime="2026-02-17 00:40:37.811848876 +0000 UTC m=+1063.827658294" watchObservedRunningTime="2026-02-17 00:40:37.816626379 +0000 UTC m=+1063.832435777" Feb 17 00:40:37 crc kubenswrapper[4805]: I0217 00:40:37.845240 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb" podStartSLOduration=2.654972002 podStartE2EDuration="14.845217172s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.103708309 +0000 UTC m=+1051.119517697" lastFinishedPulling="2026-02-17 00:40:37.293953469 +0000 UTC m=+1063.309762867" observedRunningTime="2026-02-17 00:40:37.831358118 +0000 UTC m=+1063.847167526" watchObservedRunningTime="2026-02-17 00:40:37.845217172 +0000 UTC m=+1063.861026590" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.828014 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9" event={"ID":"d5c2df2a-fe2c-4a7f-ab0c-247fac6a47e9","Type":"ContainerStarted","Data":"b1596a1a35ffc5ff72d0b5a645f7ad2e82d12c8107b51d0286da4ec967df60d1"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.829098 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.837811 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq" 
event={"ID":"797181b9-d095-42dc-9bf6-f87665ba40c5","Type":"ContainerStarted","Data":"511eff16bdff678466b41cb4fbb9e144adc7f67e2160ef7046bd4b11afb5ebb3"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.838569 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.840054 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh" event={"ID":"1fa270c7-9d09-444c-9ccd-70febd3fc194","Type":"ContainerStarted","Data":"ec14dafee05521d5f8e0e679d0610f3a6fe314103d6b71026cee0815499d30c2"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.840550 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.842159 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4" event={"ID":"63f821ff-0cb4-4722-87df-511e1758288e","Type":"ContainerStarted","Data":"e8641fafe8ccbf4e76281151b99ffa1b9eb8d5cb5ff5a0c5bc8b0cfc58a893de"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.842465 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.845486 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67" event={"ID":"46c67b9e-b2a0-4de9-9ecd-581c646896fe","Type":"ContainerStarted","Data":"7a1a877b4f2a6e5c797d95452cfb9ff5f30808db146d124da9123883fdbf0227"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.845942 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.848285 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-q8f27" event={"ID":"e878e0f7-5fd0-4ab2-8503-ce2b71c26dbe","Type":"ContainerStarted","Data":"eb46a56f84b187319566a54de433904f1179fc45349bbe1f74c93159bce283e2"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.848511 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-q8f27" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.854659 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf" event={"ID":"1eea0362-7f54-47ba-9669-c561ebcfd69d","Type":"ContainerStarted","Data":"b42ca4b452cac76e076eae598b874b3fec92e420df22021d67757d4c4ed90ec4"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.855429 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.864937 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z" event={"ID":"5fbc6ce1-751b-42d1-9f5c-1acc6bf0fdd2","Type":"ContainerStarted","Data":"1041c98c836a734c2a36937bca1906dd698bab554fe79b837e97ac9d5c07cb56"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 
00:40:38.865682 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.866964 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct" event={"ID":"cf23fb16-30b5-49d7-a204-2140b7afa8dc","Type":"ContainerStarted","Data":"df91d648654318629ea79c3d54ad9a051c30e02e497e551b762a55b8dcc4585a"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.867385 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.868642 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc" event={"ID":"8d4c5113-e984-4b0c-b1c2-45b31750d654","Type":"ContainerStarted","Data":"cd18e69a69c8e576f4e7106a0d590bbdefb7153616a0fa8a7a8a0def08446001"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.869010 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.869693 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9" podStartSLOduration=3.109658835 podStartE2EDuration="15.869679652s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:24.514091931 +0000 UTC m=+1050.529901319" lastFinishedPulling="2026-02-17 00:40:37.274112738 +0000 UTC m=+1063.289922136" observedRunningTime="2026-02-17 00:40:38.865926597 +0000 UTC m=+1064.881735995" watchObservedRunningTime="2026-02-17 00:40:38.869679652 +0000 UTC m=+1064.885489040" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.870013 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4" event={"ID":"13981a34-157a-433a-bb3b-5ec086dc6506","Type":"ContainerStarted","Data":"804fde14af23ecff952116eedacad73b810513c180346caa5010939a737378dd"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.870523 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.872181 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv" event={"ID":"fa1c6038-a220-4d79-8d11-97d0dbbb4b38","Type":"ContainerStarted","Data":"8f90c8ca4d0f256e9a179693126c6c20690428312adacae5a2e09cacbfde1761"} Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.872204 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.898015 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh" podStartSLOduration=3.81373601 podStartE2EDuration="15.898000238s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.307466435 +0000 UTC m=+1051.323275833" lastFinishedPulling="2026-02-17 00:40:37.391730643 +0000 UTC m=+1063.407540061" 
observedRunningTime="2026-02-17 00:40:38.894644375 +0000 UTC m=+1064.910453773" watchObservedRunningTime="2026-02-17 00:40:38.898000238 +0000 UTC m=+1064.913809636" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.941096 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z" podStartSLOduration=3.745879967 podStartE2EDuration="15.941080684s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.103518954 +0000 UTC m=+1051.119328352" lastFinishedPulling="2026-02-17 00:40:37.298719671 +0000 UTC m=+1063.314529069" observedRunningTime="2026-02-17 00:40:38.937108623 +0000 UTC m=+1064.952918021" watchObservedRunningTime="2026-02-17 00:40:38.941080684 +0000 UTC m=+1064.956890082" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.943222 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67" podStartSLOduration=3.964198247 podStartE2EDuration="15.943215243s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.357655029 +0000 UTC m=+1051.373464427" lastFinishedPulling="2026-02-17 00:40:37.336672025 +0000 UTC m=+1063.352481423" observedRunningTime="2026-02-17 00:40:38.920634876 +0000 UTC m=+1064.936444274" watchObservedRunningTime="2026-02-17 00:40:38.943215243 +0000 UTC m=+1064.959024631" Feb 17 00:40:38 crc kubenswrapper[4805]: I0217 00:40:38.964040 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf" podStartSLOduration=3.719668689 podStartE2EDuration="15.964024431s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.049141584 +0000 UTC m=+1051.064950982" lastFinishedPulling="2026-02-17 00:40:37.293497326 +0000 UTC m=+1063.309306724" observedRunningTime="2026-02-17 00:40:38.96399454 +0000 UTC m=+1064.979803938" watchObservedRunningTime="2026-02-17 00:40:38.964024431 +0000 UTC m=+1064.979833829" Feb 17 00:40:39 crc kubenswrapper[4805]: I0217 00:40:39.018962 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4" podStartSLOduration=4.063515054 podStartE2EDuration="16.018947175s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.317496454 +0000 UTC m=+1051.333305852" lastFinishedPulling="2026-02-17 00:40:37.272928575 +0000 UTC m=+1063.288737973" observedRunningTime="2026-02-17 00:40:39.015484049 +0000 UTC m=+1065.031293447" watchObservedRunningTime="2026-02-17 00:40:39.018947175 +0000 UTC m=+1065.034756573" Feb 17 00:40:39 crc kubenswrapper[4805]: I0217 00:40:39.020341 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq" podStartSLOduration=3.811757054 podStartE2EDuration="16.020312933s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.116661548 +0000 UTC m=+1051.132470936" lastFinishedPulling="2026-02-17 00:40:37.325217417 +0000 UTC m=+1063.341026815" observedRunningTime="2026-02-17 00:40:39.00003192 +0000 UTC m=+1065.015841328" watchObservedRunningTime="2026-02-17 00:40:39.020312933 +0000 UTC m=+1065.036122331" Feb 17 00:40:39 crc kubenswrapper[4805]: I0217 00:40:39.035484 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-q8f27" podStartSLOduration=4.262791796 podStartE2EDuration="16.035462794s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.552762505 +0000 UTC m=+1051.568571903" lastFinishedPulling="2026-02-17 00:40:37.325433503 +0000 UTC m=+1063.341242901" observedRunningTime="2026-02-17 00:40:39.030847076 +0000 UTC m=+1065.046656474" watchObservedRunningTime="2026-02-17 00:40:39.035462794 +0000 UTC m=+1065.051272192" Feb 17 00:40:39 crc kubenswrapper[4805]: I0217 00:40:39.045857 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct" podStartSLOduration=3.463326552 podStartE2EDuration="16.045843482s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:24.739134488 +0000 UTC m=+1050.754943886" lastFinishedPulling="2026-02-17 00:40:37.321651418 +0000 UTC m=+1063.337460816" observedRunningTime="2026-02-17 00:40:39.043341133 +0000 UTC m=+1065.059150531" watchObservedRunningTime="2026-02-17 00:40:39.045843482 +0000 UTC m=+1065.061652880" Feb 17 00:40:39 crc kubenswrapper[4805]: I0217 00:40:39.068292 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc" podStartSLOduration=4.322543785 podStartE2EDuration="16.068276495s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.554784571 +0000 UTC m=+1051.570593969" lastFinishedPulling="2026-02-17 00:40:37.300517291 +0000 UTC m=+1063.316326679" observedRunningTime="2026-02-17 00:40:39.066759513 +0000 UTC m=+1065.082568911" watchObservedRunningTime="2026-02-17 00:40:39.068276495 +0000 UTC m=+1065.084085893" Feb 17 00:40:39 crc kubenswrapper[4805]: I0217 00:40:39.111773 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4" podStartSLOduration=3.884235297 podStartE2EDuration="16.111758912s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.092463107 +0000 UTC m=+1051.108272505" lastFinishedPulling="2026-02-17 00:40:37.319986722 +0000 UTC m=+1063.335796120" observedRunningTime="2026-02-17 00:40:39.111073993 +0000 UTC m=+1065.126883391" watchObservedRunningTime="2026-02-17 00:40:39.111758912 +0000 UTC m=+1065.127568300" Feb 17 00:40:39 crc kubenswrapper[4805]: I0217 00:40:39.116985 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv" podStartSLOduration=4.183424323 podStartE2EDuration="16.116976717s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.356303581 +0000 UTC m=+1051.372112979" lastFinishedPulling="2026-02-17 00:40:37.289855975 +0000 UTC m=+1063.305665373" observedRunningTime="2026-02-17 00:40:39.093127775 +0000 UTC m=+1065.108937173" watchObservedRunningTime="2026-02-17 00:40:39.116976717 +0000 UTC m=+1065.132786115" Feb 17 00:40:39 crc kubenswrapper[4805]: I0217 00:40:39.603936 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: 
\"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:39 crc kubenswrapper[4805]: E0217 00:40:39.604094 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 00:40:39 crc kubenswrapper[4805]: E0217 00:40:39.604149 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert podName:97c634de-ffb7-4340-b622-782ee351de54 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:55.604133911 +0000 UTC m=+1081.619943309 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert") pod "infra-operator-controller-manager-79d975b745-lw4pd" (UID: "97c634de-ffb7-4340-b622-782ee351de54") : secret "infra-operator-webhook-server-cert" not found Feb 17 00:40:40 crc kubenswrapper[4805]: I0217 00:40:40.010656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:40 crc kubenswrapper[4805]: E0217 00:40:40.011220 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:40 crc kubenswrapper[4805]: E0217 00:40:40.011306 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert podName:bee5466c-cf0f-4af9-8c9f-f323e814d02d nodeName:}" failed. No retries permitted until 2026-02-17 00:40:56.011279743 +0000 UTC m=+1082.027089181 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" (UID: "bee5466c-cf0f-4af9-8c9f-f323e814d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 00:40:40 crc kubenswrapper[4805]: I0217 00:40:40.417072 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:40 crc kubenswrapper[4805]: I0217 00:40:40.417227 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:40 crc kubenswrapper[4805]: E0217 00:40:40.417871 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 00:40:40 crc kubenswrapper[4805]: E0217 00:40:40.417933 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs podName:ed86b6a0-d091-482b-8bdb-0d0ae3153733 nodeName:}" failed. No retries permitted until 2026-02-17 00:40:56.417917522 +0000 UTC m=+1082.433726920 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs") pod "openstack-operator-controller-manager-5fddb9857-6r6nf" (UID: "ed86b6a0-d091-482b-8bdb-0d0ae3153733") : secret "webhook-server-cert" not found Feb 17 00:40:40 crc kubenswrapper[4805]: I0217 00:40:40.431735 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-metrics-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:42 crc kubenswrapper[4805]: I0217 00:40:42.901052 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" event={"ID":"69a2b32d-8ef4-4bcf-a048-d169e9577f38","Type":"ContainerStarted","Data":"980526f4821b792347515d260721c8b21368b8691a52c6ef5da958887de4a634"} Feb 17 00:40:42 crc kubenswrapper[4805]: I0217 00:40:42.901571 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" Feb 17 00:40:42 crc kubenswrapper[4805]: I0217 00:40:42.904655 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" event={"ID":"2f94b9ee-0d59-4dfc-8a01-c506d368327f","Type":"ContainerStarted","Data":"977cf135dd97931c59357d9f5c48fe6ce2cba1a9543af31799ad99c92f6af300"} Feb 17 00:40:42 crc kubenswrapper[4805]: I0217 00:40:42.904885 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" Feb 17 00:40:42 crc kubenswrapper[4805]: I0217 00:40:42.925752 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" podStartSLOduration=3.517586491 podStartE2EDuration="19.925731521s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.589484415 +0000 UTC m=+1051.605293813" lastFinishedPulling="2026-02-17 00:40:41.997629455 +0000 UTC m=+1068.013438843" observedRunningTime="2026-02-17 00:40:42.918720256 +0000 UTC m=+1068.934529654" watchObservedRunningTime="2026-02-17 00:40:42.925731521 +0000 UTC m=+1068.941540919" Feb 17 00:40:42 crc kubenswrapper[4805]: I0217 00:40:42.936578 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" podStartSLOduration=3.516686065 podStartE2EDuration="19.936563691s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.594169765 +0000 UTC m=+1051.609979163" lastFinishedPulling="2026-02-17 00:40:42.014047391 +0000 UTC m=+1068.029856789" observedRunningTime="2026-02-17 00:40:42.934465693 +0000 UTC m=+1068.950275081" watchObservedRunningTime="2026-02-17 00:40:42.936563691 +0000 UTC m=+1068.952373089" Feb 17 00:40:43 crc kubenswrapper[4805]: I0217 00:40:43.779914 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vjpw9" Feb 17 00:40:43 crc kubenswrapper[4805]: I0217 00:40:43.800899 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-xmfct" Feb 17 00:40:43 crc kubenswrapper[4805]: I0217 00:40:43.867832 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-jbspb" Feb 17 00:40:43 crc kubenswrapper[4805]: I0217 00:40:43.913587 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" event={"ID":"807e772b-99f0-4578-b462-14b359040c87","Type":"ContainerStarted","Data":"ce6c3675f756b68f853f783445706bbaed76100dbea84886aea5d0f9534259d8"} Feb 17 00:40:43 crc kubenswrapper[4805]: I0217 00:40:43.913991 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" Feb 17 00:40:43 crc kubenswrapper[4805]: I0217 00:40:43.922954 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-9kk4z" Feb 17 00:40:43 crc kubenswrapper[4805]: I0217 00:40:43.940868 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-8zbbf" Feb 17 00:40:43 crc kubenswrapper[4805]: I0217 00:40:43.959741 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" podStartSLOduration=2.748765167 podStartE2EDuration="19.959723125s" podCreationTimestamp="2026-02-17 00:40:24 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.603270347 +0000 UTC m=+1051.619079735" lastFinishedPulling="2026-02-17 00:40:42.814228295 +0000 UTC m=+1068.830037693" observedRunningTime="2026-02-17 
00:40:43.930138574 +0000 UTC m=+1069.945947972" watchObservedRunningTime="2026-02-17 00:40:43.959723125 +0000 UTC m=+1069.975532523" Feb 17 00:40:43 crc kubenswrapper[4805]: I0217 00:40:43.963063 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-q8nbq" Feb 17 00:40:43 crc kubenswrapper[4805]: I0217 00:40:43.966792 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-7w28d" Feb 17 00:40:44 crc kubenswrapper[4805]: I0217 00:40:44.070397 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-n26v4" Feb 17 00:40:44 crc kubenswrapper[4805]: I0217 00:40:44.285114 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pmmsh" Feb 17 00:40:44 crc kubenswrapper[4805]: I0217 00:40:44.335075 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-sbwq4" Feb 17 00:40:44 crc kubenswrapper[4805]: I0217 00:40:44.478314 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-q9dgv" Feb 17 00:40:44 crc kubenswrapper[4805]: I0217 00:40:44.567943 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rwt67" Feb 17 00:40:44 crc kubenswrapper[4805]: I0217 00:40:44.593956 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-h4vnc" Feb 17 00:40:44 crc kubenswrapper[4805]: I0217 00:40:44.715799 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-q8f27" Feb 17 00:40:50 crc kubenswrapper[4805]: I0217 00:40:50.975138 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" event={"ID":"da0ffea9-23b4-41d5-b3db-8d76372c949d","Type":"ContainerStarted","Data":"f71f64e8325b25bf56995a7f6403e2bfdc70539f2bf4160583237a2a223c6ff0"} Feb 17 00:40:50 crc kubenswrapper[4805]: I0217 00:40:50.975908 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" Feb 17 00:40:50 crc kubenswrapper[4805]: I0217 00:40:50.978430 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc" event={"ID":"ad57ab8f-521c-44a5-b5d5-22264e6a79b0","Type":"ContainerStarted","Data":"b79d5aa9b071bb9026cb50970da2238a74ff52d1a599c8324194e0b9e5a81e08"} Feb 17 00:40:50 crc kubenswrapper[4805]: I0217 00:40:50.980569 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" event={"ID":"b83736b0-6ae8-4fc4-ab02-f731ce083723","Type":"ContainerStarted","Data":"fdcf069f5dcd7ce5f3a54e02c09aed516c963d767f229cddcb35cabab4758f32"} Feb 17 00:40:50 crc kubenswrapper[4805]: I0217 00:40:50.980797 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" Feb 17 00:40:50 
crc kubenswrapper[4805]: I0217 00:40:50.994628 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" podStartSLOduration=3.417361648 podStartE2EDuration="27.994615559s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.608363269 +0000 UTC m=+1051.624172667" lastFinishedPulling="2026-02-17 00:40:50.18561717 +0000 UTC m=+1076.201426578" observedRunningTime="2026-02-17 00:40:50.991353778 +0000 UTC m=+1077.007163176" watchObservedRunningTime="2026-02-17 00:40:50.994615559 +0000 UTC m=+1077.010424957" Feb 17 00:40:51 crc kubenswrapper[4805]: I0217 00:40:51.013059 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" podStartSLOduration=3.407772971 podStartE2EDuration="28.01304001s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.592693134 +0000 UTC m=+1051.608502532" lastFinishedPulling="2026-02-17 00:40:50.197960133 +0000 UTC m=+1076.213769571" observedRunningTime="2026-02-17 00:40:51.005555823 +0000 UTC m=+1077.021365241" watchObservedRunningTime="2026-02-17 00:40:51.01304001 +0000 UTC m=+1077.028849418" Feb 17 00:40:51 crc kubenswrapper[4805]: I0217 00:40:51.023627 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wkvdc" podStartSLOduration=2.415122025 podStartE2EDuration="27.023607784s" podCreationTimestamp="2026-02-17 00:40:24 +0000 UTC" firstStartedPulling="2026-02-17 00:40:25.600721126 +0000 UTC m=+1051.616530524" lastFinishedPulling="2026-02-17 00:40:50.209206865 +0000 UTC m=+1076.225016283" observedRunningTime="2026-02-17 00:40:51.022688268 +0000 UTC m=+1077.038497676" watchObservedRunningTime="2026-02-17 00:40:51.023607784 +0000 UTC m=+1077.039417182" Feb 17 00:40:53 crc kubenswrapper[4805]: I0217 00:40:53.077748 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:40:53 crc kubenswrapper[4805]: I0217 00:40:53.077840 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:40:54 crc kubenswrapper[4805]: I0217 00:40:54.419101 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-klb75" Feb 17 00:40:54 crc kubenswrapper[4805]: I0217 00:40:54.679871 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-prh9h" Feb 17 00:40:54 crc kubenswrapper[4805]: I0217 00:40:54.778951 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6phwc" Feb 17 00:40:55 crc kubenswrapper[4805]: I0217 00:40:55.611239 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: \"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:55 crc kubenswrapper[4805]: I0217 00:40:55.621767 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97c634de-ffb7-4340-b622-782ee351de54-cert\") pod \"infra-operator-controller-manager-79d975b745-lw4pd\" (UID: \"97c634de-ffb7-4340-b622-782ee351de54\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:55 crc kubenswrapper[4805]: I0217 00:40:55.785379 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pdrrq" Feb 17 00:40:55 crc kubenswrapper[4805]: I0217 00:40:55.793421 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:40:56 crc kubenswrapper[4805]: I0217 00:40:56.018281 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:56 crc kubenswrapper[4805]: I0217 00:40:56.028588 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bee5466c-cf0f-4af9-8c9f-f323e814d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd\" (UID: \"bee5466c-cf0f-4af9-8c9f-f323e814d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:56 crc kubenswrapper[4805]: I0217 00:40:56.049253 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4xgsf" Feb 17 00:40:56 crc kubenswrapper[4805]: I0217 00:40:56.056766 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:40:56 crc kubenswrapper[4805]: I0217 00:40:56.324758 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd"] Feb 17 00:40:56 crc kubenswrapper[4805]: I0217 00:40:56.332605 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd"] Feb 17 00:40:56 crc kubenswrapper[4805]: W0217 00:40:56.332981 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97c634de_ffb7_4340_b622_782ee351de54.slice/crio-7253e5af67b5fe475a0e1058c538e5e8fbf337de53db03a4fe0aa2a6b8899ec0 WatchSource:0}: Error finding container 7253e5af67b5fe475a0e1058c538e5e8fbf337de53db03a4fe0aa2a6b8899ec0: Status 404 returned error can't find the container with id 7253e5af67b5fe475a0e1058c538e5e8fbf337de53db03a4fe0aa2a6b8899ec0 Feb 17 00:40:56 crc kubenswrapper[4805]: W0217 00:40:56.336067 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbee5466c_cf0f_4af9_8c9f_f323e814d02d.slice/crio-a3ffa00f5f24efca80e13ece0fa79347c9ed10fa5131a8b54dcf3c63f5dc99f4 WatchSource:0}: Error finding container a3ffa00f5f24efca80e13ece0fa79347c9ed10fa5131a8b54dcf3c63f5dc99f4: Status 404 returned error can't find the container with id a3ffa00f5f24efca80e13ece0fa79347c9ed10fa5131a8b54dcf3c63f5dc99f4 Feb 17 00:40:56 crc kubenswrapper[4805]: I0217 00:40:56.425774 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:56 crc kubenswrapper[4805]: I0217 00:40:56.432829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed86b6a0-d091-482b-8bdb-0d0ae3153733-webhook-certs\") pod \"openstack-operator-controller-manager-5fddb9857-6r6nf\" (UID: \"ed86b6a0-d091-482b-8bdb-0d0ae3153733\") " pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:56 crc kubenswrapper[4805]: I0217 00:40:56.639643 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tsxhd" Feb 17 00:40:56 crc kubenswrapper[4805]: I0217 00:40:56.648760 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:56 crc kubenswrapper[4805]: I0217 00:40:56.976821 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf"] Feb 17 00:40:57 crc kubenswrapper[4805]: I0217 00:40:57.041077 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" event={"ID":"bee5466c-cf0f-4af9-8c9f-f323e814d02d","Type":"ContainerStarted","Data":"a3ffa00f5f24efca80e13ece0fa79347c9ed10fa5131a8b54dcf3c63f5dc99f4"} Feb 17 00:40:57 crc kubenswrapper[4805]: I0217 00:40:57.042284 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" event={"ID":"97c634de-ffb7-4340-b622-782ee351de54","Type":"ContainerStarted","Data":"7253e5af67b5fe475a0e1058c538e5e8fbf337de53db03a4fe0aa2a6b8899ec0"} Feb 17 00:40:57 crc kubenswrapper[4805]: I0217 00:40:57.043525 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" event={"ID":"ed86b6a0-d091-482b-8bdb-0d0ae3153733","Type":"ContainerStarted","Data":"b1a181e33d7444b78cd551e6478f474ea917c1214d7577680908330cd1c80148"} Feb 17 00:40:58 crc kubenswrapper[4805]: I0217 00:40:58.051792 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" event={"ID":"ed86b6a0-d091-482b-8bdb-0d0ae3153733","Type":"ContainerStarted","Data":"3efed027047df0b66d28eacb3a1e38ef878a2275686f73a3b5656e61362fd320"} Feb 17 00:40:58 crc kubenswrapper[4805]: I0217 00:40:58.051916 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:40:58 crc kubenswrapper[4805]: I0217 00:40:58.083505 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" podStartSLOduration=34.083487451 podStartE2EDuration="34.083487451s" podCreationTimestamp="2026-02-17 00:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:40:58.073393111 +0000 UTC m=+1084.089202509" watchObservedRunningTime="2026-02-17 00:40:58.083487451 +0000 UTC m=+1084.099296849" Feb 17 00:41:03 crc kubenswrapper[4805]: I0217 00:41:03.101113 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" event={"ID":"bee5466c-cf0f-4af9-8c9f-f323e814d02d","Type":"ContainerStarted","Data":"873304d54759e30e78992d567b2c3ea00a05a5073f235ee02bb5e6b83fd39d86"} Feb 17 00:41:03 crc kubenswrapper[4805]: I0217 00:41:03.102136 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:41:03 crc kubenswrapper[4805]: I0217 00:41:03.103864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" event={"ID":"97c634de-ffb7-4340-b622-782ee351de54","Type":"ContainerStarted","Data":"a47c37883aefc9b6243ae09a0ea5c9b9f73835bc51446d45d73522f526c714ad"} Feb 17 00:41:03 crc kubenswrapper[4805]: I0217 00:41:03.104085 4805 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:41:03 crc kubenswrapper[4805]: I0217 00:41:03.149864 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" podStartSLOduration=34.517941241 podStartE2EDuration="40.149832836s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:56.340162685 +0000 UTC m=+1082.355972103" lastFinishedPulling="2026-02-17 00:41:01.97205426 +0000 UTC m=+1087.987863698" observedRunningTime="2026-02-17 00:41:03.141468404 +0000 UTC m=+1089.157277842" watchObservedRunningTime="2026-02-17 00:41:03.149832836 +0000 UTC m=+1089.165642284" Feb 17 00:41:03 crc kubenswrapper[4805]: I0217 00:41:03.193587 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" podStartSLOduration=34.563761001 podStartE2EDuration="40.193559059s" podCreationTimestamp="2026-02-17 00:40:23 +0000 UTC" firstStartedPulling="2026-02-17 00:40:56.336628836 +0000 UTC m=+1082.352438244" lastFinishedPulling="2026-02-17 00:41:01.966426854 +0000 UTC m=+1087.982236302" observedRunningTime="2026-02-17 00:41:03.167840056 +0000 UTC m=+1089.183649494" watchObservedRunningTime="2026-02-17 00:41:03.193559059 +0000 UTC m=+1089.209368467" Feb 17 00:41:04 crc kubenswrapper[4805]: I0217 00:41:04.396564 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kdndp" Feb 17 00:41:04 crc kubenswrapper[4805]: I0217 00:41:04.700479 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6bf489ffd7-pw66z" Feb 17 00:41:06 crc kubenswrapper[4805]: I0217 00:41:06.657020 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5fddb9857-6r6nf" Feb 17 00:41:15 crc kubenswrapper[4805]: I0217 00:41:15.801403 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-lw4pd" Feb 17 00:41:16 crc kubenswrapper[4805]: I0217 00:41:16.064728 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd" Feb 17 00:41:23 crc kubenswrapper[4805]: I0217 00:41:23.077426 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:41:23 crc kubenswrapper[4805]: I0217 00:41:23.077871 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:41:23 crc kubenswrapper[4805]: I0217 00:41:23.077937 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:41:23 crc kubenswrapper[4805]: I0217 00:41:23.079047 
4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b39148eed4bf6c031ce94a8f02e78b29f27257693ebbfc8744d515a52505620"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:41:23 crc kubenswrapper[4805]: I0217 00:41:23.079299 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://9b39148eed4bf6c031ce94a8f02e78b29f27257693ebbfc8744d515a52505620" gracePeriod=600 Feb 17 00:41:24 crc kubenswrapper[4805]: I0217 00:41:24.149173 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="9b39148eed4bf6c031ce94a8f02e78b29f27257693ebbfc8744d515a52505620" exitCode=0 Feb 17 00:41:24 crc kubenswrapper[4805]: I0217 00:41:24.149471 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"9b39148eed4bf6c031ce94a8f02e78b29f27257693ebbfc8744d515a52505620"} Feb 17 00:41:24 crc kubenswrapper[4805]: I0217 00:41:24.149496 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"e2ac2cae8d5d1427fe9596d0b76a1c102de0e2b3a3a542a90b4c3a31f375825b"} Feb 17 00:41:24 crc kubenswrapper[4805]: I0217 00:41:24.149512 4805 scope.go:117] "RemoveContainer" containerID="3d211867bc1681978ebc5d59d36a82514c65d45557bfedaef2dbb1dd0c87d945" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.291599 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5wlm8"] Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.293417 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.308765 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.309984 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.310096 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.310180 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-lzwr5" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.330549 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5wlm8"] Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.387250 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dskh\" (UniqueName: \"kubernetes.io/projected/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-kube-api-access-2dskh\") pod \"dnsmasq-dns-675f4bcbfc-5wlm8\" (UID: \"1241f903-66c5-4749-8fb5-f20e9b7cbd2c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.387499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-config\") pod \"dnsmasq-dns-675f4bcbfc-5wlm8\" (UID: \"1241f903-66c5-4749-8fb5-f20e9b7cbd2c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.469801 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r8wm9"] Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.470959 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.485735 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.488045 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dskh\" (UniqueName: \"kubernetes.io/projected/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-kube-api-access-2dskh\") pod \"dnsmasq-dns-675f4bcbfc-5wlm8\" (UID: \"1241f903-66c5-4749-8fb5-f20e9b7cbd2c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.488117 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-config\") pod \"dnsmasq-dns-675f4bcbfc-5wlm8\" (UID: \"1241f903-66c5-4749-8fb5-f20e9b7cbd2c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.488186 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rm4x\" (UniqueName: \"kubernetes.io/projected/e5445c48-ba5d-4416-a178-569174ed8792-kube-api-access-5rm4x\") pod \"dnsmasq-dns-78dd6ddcc-r8wm9\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.488226 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-config\") pod \"dnsmasq-dns-78dd6ddcc-r8wm9\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.488244 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-r8wm9\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.489065 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-config\") pod \"dnsmasq-dns-675f4bcbfc-5wlm8\" (UID: \"1241f903-66c5-4749-8fb5-f20e9b7cbd2c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.490932 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r8wm9"] Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.526520 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dskh\" (UniqueName: \"kubernetes.io/projected/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-kube-api-access-2dskh\") pod \"dnsmasq-dns-675f4bcbfc-5wlm8\" (UID: \"1241f903-66c5-4749-8fb5-f20e9b7cbd2c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.589016 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rm4x\" (UniqueName: \"kubernetes.io/projected/e5445c48-ba5d-4416-a178-569174ed8792-kube-api-access-5rm4x\") pod \"dnsmasq-dns-78dd6ddcc-r8wm9\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:34 crc 
kubenswrapper[4805]: I0217 00:41:34.589063 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-config\") pod \"dnsmasq-dns-78dd6ddcc-r8wm9\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.589084 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-r8wm9\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.589829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-r8wm9\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.589870 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-config\") pod \"dnsmasq-dns-78dd6ddcc-r8wm9\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.604897 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rm4x\" (UniqueName: \"kubernetes.io/projected/e5445c48-ba5d-4416-a178-569174ed8792-kube-api-access-5rm4x\") pod \"dnsmasq-dns-78dd6ddcc-r8wm9\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.612936 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" Feb 17 00:41:34 crc kubenswrapper[4805]: I0217 00:41:34.785396 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:35 crc kubenswrapper[4805]: I0217 00:41:35.016704 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5wlm8"] Feb 17 00:41:35 crc kubenswrapper[4805]: W0217 00:41:35.225092 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5445c48_ba5d_4416_a178_569174ed8792.slice/crio-30b596ef2c01f91190322440264d73e808b0a3c7336ca75671bc97ca590eac8a WatchSource:0}: Error finding container 30b596ef2c01f91190322440264d73e808b0a3c7336ca75671bc97ca590eac8a: Status 404 returned error can't find the container with id 30b596ef2c01f91190322440264d73e808b0a3c7336ca75671bc97ca590eac8a Feb 17 00:41:35 crc kubenswrapper[4805]: I0217 00:41:35.226034 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r8wm9"] Feb 17 00:41:35 crc kubenswrapper[4805]: I0217 00:41:35.240756 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" event={"ID":"e5445c48-ba5d-4416-a178-569174ed8792","Type":"ContainerStarted","Data":"30b596ef2c01f91190322440264d73e808b0a3c7336ca75671bc97ca590eac8a"} Feb 17 00:41:35 crc kubenswrapper[4805]: I0217 00:41:35.242303 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" event={"ID":"1241f903-66c5-4749-8fb5-f20e9b7cbd2c","Type":"ContainerStarted","Data":"8c99c3eccf25b4d39bbdc76ed50cd4d79b093d0676f14cfc9e0e9f43f4de8573"} Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.557265 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5wlm8"] Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.585306 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drvkf"] Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.590142 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.608957 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drvkf"] Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.734566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-drvkf\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.734693 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s55l5\" (UniqueName: \"kubernetes.io/projected/4bd94f0d-589d-4f9c-83a8-b18e848d171b-kube-api-access-s55l5\") pod \"dnsmasq-dns-666b6646f7-drvkf\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.734725 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-config\") pod \"dnsmasq-dns-666b6646f7-drvkf\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.837905 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-drvkf\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.838705 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-drvkf\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.839221 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s55l5\" (UniqueName: \"kubernetes.io/projected/4bd94f0d-589d-4f9c-83a8-b18e848d171b-kube-api-access-s55l5\") pod \"dnsmasq-dns-666b6646f7-drvkf\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.839254 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-config\") pod \"dnsmasq-dns-666b6646f7-drvkf\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.840925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-config\") pod \"dnsmasq-dns-666b6646f7-drvkf\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.865364 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r8wm9"] Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.885439 
4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55l5\" (UniqueName: \"kubernetes.io/projected/4bd94f0d-589d-4f9c-83a8-b18e848d171b-kube-api-access-s55l5\") pod \"dnsmasq-dns-666b6646f7-drvkf\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.888713 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qxrvd"] Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.906263 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qxrvd"] Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.906379 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:36 crc kubenswrapper[4805]: I0217 00:41:36.934163 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.041475 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qxrvd\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.041525 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8wp\" (UniqueName: \"kubernetes.io/projected/3996a68a-13de-4796-bb04-670cb7288b6d-kube-api-access-6k8wp\") pod \"dnsmasq-dns-57d769cc4f-qxrvd\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.041586 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-config\") pod \"dnsmasq-dns-57d769cc4f-qxrvd\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.147211 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qxrvd\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.147562 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k8wp\" (UniqueName: \"kubernetes.io/projected/3996a68a-13de-4796-bb04-670cb7288b6d-kube-api-access-6k8wp\") pod \"dnsmasq-dns-57d769cc4f-qxrvd\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.147700 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-config\") pod \"dnsmasq-dns-57d769cc4f-qxrvd\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.148966 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-config\") pod \"dnsmasq-dns-57d769cc4f-qxrvd\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.149195 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qxrvd\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.205622 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k8wp\" (UniqueName: \"kubernetes.io/projected/3996a68a-13de-4796-bb04-670cb7288b6d-kube-api-access-6k8wp\") pod \"dnsmasq-dns-57d769cc4f-qxrvd\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.242301 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.435484 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drvkf"] Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.661710 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qxrvd"] Feb 17 00:41:37 crc kubenswrapper[4805]: W0217 00:41:37.666416 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3996a68a_13de_4796_bb04_670cb7288b6d.slice/crio-55aa3d90dfb2e12a08b5b225728a204b4d75b391057a1ad29d62e0219ebc7319 WatchSource:0}: Error finding container 55aa3d90dfb2e12a08b5b225728a204b4d75b391057a1ad29d62e0219ebc7319: Status 404 returned error can't find the container with id 55aa3d90dfb2e12a08b5b225728a204b4d75b391057a1ad29d62e0219ebc7319 Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.718559 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.720600 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.724512 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zvbqj" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.724700 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.724827 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.724917 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.725003 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.724937 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.725132 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.730979 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.863472 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2ca81e9-e569-4f1b-afcc-be3e47407114-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.863593 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.863637 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.863685 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.863719 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.863750 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.863863 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-config-data\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.863952 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2ca81e9-e569-4f1b-afcc-be3e47407114-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.864091 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.864190 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.864213 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrwhv\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-kube-api-access-vrwhv\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.965740 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966113 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrwhv\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-kube-api-access-vrwhv\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966162 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2ca81e9-e569-4f1b-afcc-be3e47407114-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966182 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966219 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966235 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966253 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966275 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-config-data\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966302 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2ca81e9-e569-4f1b-afcc-be3e47407114-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966967 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.966971 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.967848 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.968228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.968311 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.968829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-config-data\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.973720 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2ca81e9-e569-4f1b-afcc-be3e47407114-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.973828 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.978539 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.980910 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2ca81e9-e569-4f1b-afcc-be3e47407114-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.984612 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrwhv\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-kube-api-access-vrwhv\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.994428 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " pod="openstack/rabbitmq-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.995178 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.997536 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:37 crc kubenswrapper[4805]: I0217 00:41:37.999948 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.000420 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.000464 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.000674 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.003741 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.004053 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.004171 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-djq6d" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.012749 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.075974 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.168614 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.168677 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8l9f\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-kube-api-access-h8l9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.168715 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.168741 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.168872 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.169034 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.169164 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc55b214-5b43-49cd-aadb-967188b34da1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.169197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.169229 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc55b214-5b43-49cd-aadb-967188b34da1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.169264 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.169394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.277310 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.277552 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8l9f\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-kube-api-access-h8l9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.277599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.277633 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.277669 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.277697 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.277742 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc55b214-5b43-49cd-aadb-967188b34da1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.277766 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.277789 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc55b214-5b43-49cd-aadb-967188b34da1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.277816 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.277856 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.279766 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.279974 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.280826 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.281839 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.289889 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.296253 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.296261 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc55b214-5b43-49cd-aadb-967188b34da1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.297170 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.320010 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc55b214-5b43-49cd-aadb-967188b34da1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.341346 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.341926 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h8l9f\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-kube-api-access-h8l9f\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.363852 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" event={"ID":"3996a68a-13de-4796-bb04-670cb7288b6d","Type":"ContainerStarted","Data":"55aa3d90dfb2e12a08b5b225728a204b4d75b391057a1ad29d62e0219ebc7319"} Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.365873 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" event={"ID":"4bd94f0d-589d-4f9c-83a8-b18e848d171b","Type":"ContainerStarted","Data":"94a6878f22248170e11ac4b905f3d8e450091dde74288fe5177518fc1a7742d6"} Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.374204 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:38 crc kubenswrapper[4805]: I0217 00:41:38.677800 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.349024 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.353737 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.355780 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.357498 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.357765 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2s2h5" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.362366 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.363181 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.377567 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.509677 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2cc2653c-ccd4-46b3-993c-2447efa79c98-kolla-config\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.509739 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cc2653c-ccd4-46b3-993c-2447efa79c98-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 
00:41:39.509813 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vzl\" (UniqueName: \"kubernetes.io/projected/2cc2653c-ccd4-46b3-993c-2447efa79c98-kube-api-access-h8vzl\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.509858 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc2653c-ccd4-46b3-993c-2447efa79c98-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.509895 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc2653c-ccd4-46b3-993c-2447efa79c98-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.509934 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2cc2653c-ccd4-46b3-993c-2447efa79c98-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.509971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.510004 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2cc2653c-ccd4-46b3-993c-2447efa79c98-config-data-default\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.610966 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2cc2653c-ccd4-46b3-993c-2447efa79c98-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.611016 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.611049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2cc2653c-ccd4-46b3-993c-2447efa79c98-config-data-default\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.611075 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2cc2653c-ccd4-46b3-993c-2447efa79c98-kolla-config\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.611118 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cc2653c-ccd4-46b3-993c-2447efa79c98-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.611196 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8vzl\" (UniqueName: \"kubernetes.io/projected/2cc2653c-ccd4-46b3-993c-2447efa79c98-kube-api-access-h8vzl\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.611228 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc2653c-ccd4-46b3-993c-2447efa79c98-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.611258 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc2653c-ccd4-46b3-993c-2447efa79c98-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.612436 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.612613 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2cc2653c-ccd4-46b3-993c-2447efa79c98-config-data-default\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.612845 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2cc2653c-ccd4-46b3-993c-2447efa79c98-kolla-config\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.613727 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cc2653c-ccd4-46b3-993c-2447efa79c98-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.614311 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2cc2653c-ccd4-46b3-993c-2447efa79c98-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " 
pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.631277 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc2653c-ccd4-46b3-993c-2447efa79c98-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.635701 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc2653c-ccd4-46b3-993c-2447efa79c98-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.650842 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8vzl\" (UniqueName: \"kubernetes.io/projected/2cc2653c-ccd4-46b3-993c-2447efa79c98-kube-api-access-h8vzl\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.676802 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"2cc2653c-ccd4-46b3-993c-2447efa79c98\") " pod="openstack/openstack-galera-0" Feb 17 00:41:39 crc kubenswrapper[4805]: I0217 00:41:39.682240 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.738856 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.740900 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.745784 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vmn8t" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.746150 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.746285 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.746573 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.762420 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.832717 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.832773 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f85b021d-db5c-4716-b94f-2198c439c614-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.832804 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f85b021d-db5c-4716-b94f-2198c439c614-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.832823 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85b021d-db5c-4716-b94f-2198c439c614-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.832841 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f85b021d-db5c-4716-b94f-2198c439c614-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.832863 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f85b021d-db5c-4716-b94f-2198c439c614-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.832895 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vxbn\" (UniqueName: 
\"kubernetes.io/projected/f85b021d-db5c-4716-b94f-2198c439c614-kube-api-access-2vxbn\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.832923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85b021d-db5c-4716-b94f-2198c439c614-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.934479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f85b021d-db5c-4716-b94f-2198c439c614-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.934545 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f85b021d-db5c-4716-b94f-2198c439c614-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.934626 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vxbn\" (UniqueName: \"kubernetes.io/projected/f85b021d-db5c-4716-b94f-2198c439c614-kube-api-access-2vxbn\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.934676 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85b021d-db5c-4716-b94f-2198c439c614-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.934737 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.934790 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f85b021d-db5c-4716-b94f-2198c439c614-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.934826 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f85b021d-db5c-4716-b94f-2198c439c614-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.934847 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f85b021d-db5c-4716-b94f-2198c439c614-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.935856 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f85b021d-db5c-4716-b94f-2198c439c614-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.936505 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.936659 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f85b021d-db5c-4716-b94f-2198c439c614-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.937208 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f85b021d-db5c-4716-b94f-2198c439c614-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.937355 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f85b021d-db5c-4716-b94f-2198c439c614-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.941629 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85b021d-db5c-4716-b94f-2198c439c614-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.945908 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85b021d-db5c-4716-b94f-2198c439c614-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.967454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vxbn\" (UniqueName: \"kubernetes.io/projected/f85b021d-db5c-4716-b94f-2198c439c614-kube-api-access-2vxbn\") pod \"openstack-cell1-galera-0\" (UID: \"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:40 crc kubenswrapper[4805]: I0217 00:41:40.969355 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"f85b021d-db5c-4716-b94f-2198c439c614\") " pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.066098 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.222091 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.224017 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.230014 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.230449 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5mjl2" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.230742 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.230760 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.365419 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.365527 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbmsj\" (UniqueName: \"kubernetes.io/projected/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-kube-api-access-mbmsj\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.365695 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-config-data\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.365746 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-kolla-config\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.365869 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.467271 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.467468 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbmsj\" (UniqueName: \"kubernetes.io/projected/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-kube-api-access-mbmsj\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.467839 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-config-data\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.467867 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-kolla-config\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.468668 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-config-data\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.468774 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.468819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-kolla-config\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.470268 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.479833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.487021 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbmsj\" (UniqueName: \"kubernetes.io/projected/ccaa39fb-d7dc-4011-8b95-cd12af49adc5-kube-api-access-mbmsj\") pod \"memcached-0\" (UID: \"ccaa39fb-d7dc-4011-8b95-cd12af49adc5\") " pod="openstack/memcached-0" Feb 17 00:41:41 crc kubenswrapper[4805]: I0217 00:41:41.580356 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 00:41:43 crc kubenswrapper[4805]: I0217 00:41:43.270704 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 00:41:43 crc kubenswrapper[4805]: I0217 00:41:43.271622 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 00:41:43 crc kubenswrapper[4805]: I0217 00:41:43.274283 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wv8xp" Feb 17 00:41:43 crc kubenswrapper[4805]: I0217 00:41:43.284516 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 00:41:43 crc kubenswrapper[4805]: I0217 00:41:43.400613 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c67k\" (UniqueName: \"kubernetes.io/projected/1c79f087-7a87-405e-8a91-8450f22de65d-kube-api-access-4c67k\") pod \"kube-state-metrics-0\" (UID: \"1c79f087-7a87-405e-8a91-8450f22de65d\") " pod="openstack/kube-state-metrics-0" Feb 17 00:41:43 crc kubenswrapper[4805]: I0217 00:41:43.505044 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c67k\" (UniqueName: \"kubernetes.io/projected/1c79f087-7a87-405e-8a91-8450f22de65d-kube-api-access-4c67k\") pod \"kube-state-metrics-0\" (UID: \"1c79f087-7a87-405e-8a91-8450f22de65d\") " pod="openstack/kube-state-metrics-0" Feb 17 00:41:43 crc kubenswrapper[4805]: I0217 00:41:43.531772 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c67k\" (UniqueName: \"kubernetes.io/projected/1c79f087-7a87-405e-8a91-8450f22de65d-kube-api-access-4c67k\") pod \"kube-state-metrics-0\" (UID: \"1c79f087-7a87-405e-8a91-8450f22de65d\") " pod="openstack/kube-state-metrics-0" Feb 17 00:41:43 crc kubenswrapper[4805]: I0217 00:41:43.599952 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.267047 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx"] Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.268388 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.273690 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-9bnbd" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.273866 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.290978 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx"] Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.421924 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae33ba11-f42a-4134-be89-fbe93e76f0ae-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-lhfgx\" (UID: \"ae33ba11-f42a-4134-be89-fbe93e76f0ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.421998 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtpld\" (UniqueName: \"kubernetes.io/projected/ae33ba11-f42a-4134-be89-fbe93e76f0ae-kube-api-access-qtpld\") pod \"observability-ui-dashboards-66cbf594b5-lhfgx\" (UID: \"ae33ba11-f42a-4134-be89-fbe93e76f0ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.523626 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae33ba11-f42a-4134-be89-fbe93e76f0ae-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-lhfgx\" (UID: \"ae33ba11-f42a-4134-be89-fbe93e76f0ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.528583 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae33ba11-f42a-4134-be89-fbe93e76f0ae-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-lhfgx\" (UID: \"ae33ba11-f42a-4134-be89-fbe93e76f0ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.528783 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtpld\" (UniqueName: \"kubernetes.io/projected/ae33ba11-f42a-4134-be89-fbe93e76f0ae-kube-api-access-qtpld\") pod \"observability-ui-dashboards-66cbf594b5-lhfgx\" (UID: \"ae33ba11-f42a-4134-be89-fbe93e76f0ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.586227 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtpld\" (UniqueName: \"kubernetes.io/projected/ae33ba11-f42a-4134-be89-fbe93e76f0ae-kube-api-access-qtpld\") pod \"observability-ui-dashboards-66cbf594b5-lhfgx\" (UID: \"ae33ba11-f42a-4134-be89-fbe93e76f0ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.602751 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.608075 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-545f546dbb-kv52h"] Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.609499 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.623060 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.625205 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.629701 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.629944 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-9ttfp" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.630111 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.630214 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.630400 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.630523 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.630618 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.630731 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.636578 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-545f546dbb-kv52h"] Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.701167 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734178 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-oauth-serving-cert\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734223 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e80aa4a-3260-4111-a066-112ffac85ae7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734265 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734297 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-service-ca\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734316 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734360 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734380 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86596154-083f-466d-b410-e478418fc73c-console-serving-cert\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734423 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734453 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-console-config\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734473 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r556b\" (UniqueName: \"kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-kube-api-access-r556b\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734488 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/86596154-083f-466d-b410-e478418fc73c-console-oauth-config\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734503 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-trusted-ca-bundle\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734517 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734532 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-config\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734547 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734562 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzfgt\" (UniqueName: \"kubernetes.io/projected/86596154-083f-466d-b410-e478418fc73c-kube-api-access-vzfgt\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.734576 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.840107 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86596154-083f-466d-b410-e478418fc73c-console-oauth-config\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.840153 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-trusted-ca-bundle\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.840176 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.840192 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-config\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.840213 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.840235 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzfgt\" (UniqueName: \"kubernetes.io/projected/86596154-083f-466d-b410-e478418fc73c-kube-api-access-vzfgt\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.840731 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.841712 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-trusted-ca-bundle\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.840252 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.842578 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-oauth-serving-cert\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.842620 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e80aa4a-3260-4111-a066-112ffac85ae7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.842704 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.842773 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-service-ca\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.842805 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.842830 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.842867 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86596154-083f-466d-b410-e478418fc73c-console-serving-cert\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.842985 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.843067 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-console-config\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.843117 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r556b\" (UniqueName: \"kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-kube-api-access-r556b\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.843537 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-oauth-serving-cert\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.845063 
4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-config\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.847039 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.847074 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.847977 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e80aa4a-3260-4111-a066-112ffac85ae7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.848261 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.848569 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-service-ca\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.848773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.850036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86596154-083f-466d-b410-e478418fc73c-console-serving-cert\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.855714 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86596154-083f-466d-b410-e478418fc73c-console-config\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.858194 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r556b\" (UniqueName: \"kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-kube-api-access-r556b\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.860119 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.860612 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzfgt\" (UniqueName: \"kubernetes.io/projected/86596154-083f-466d-b410-e478418fc73c-kube-api-access-vzfgt\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.867618 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86596154-083f-466d-b410-e478418fc73c-console-oauth-config\") pod \"console-545f546dbb-kv52h\" (UID: \"86596154-083f-466d-b410-e478418fc73c\") " pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.874106 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.876980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"prometheus-metric-storage-0\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.944827 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:41:44 crc kubenswrapper[4805]: I0217 00:41:44.957706 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.884062 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cpgf5"] Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.885458 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.888126 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.888568 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-tphgh" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.888703 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.895981 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cpgf5"] Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.908968 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-dlg8k"] Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.910800 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.932814 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dlg8k"] Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.988786 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-ovn-controller-tls-certs\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.988830 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj99z\" (UniqueName: \"kubernetes.io/projected/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-kube-api-access-hj99z\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.989044 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-scripts\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.989213 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-var-log-ovn\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.989254 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-var-run\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.989276 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfdtb\" (UniqueName: \"kubernetes.io/projected/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-kube-api-access-bfdtb\") pod \"ovn-controller-ovs-dlg8k\" (UID: 
\"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.989299 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-scripts\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.989441 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-var-run-ovn\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.989576 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-etc-ovs\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.989632 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-var-run\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.989660 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-combined-ca-bundle\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.989687 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-var-log\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:46 crc kubenswrapper[4805]: I0217 00:41:46.989776 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-var-lib\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.091044 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-ovn-controller-tls-certs\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.091095 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj99z\" (UniqueName: \"kubernetes.io/projected/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-kube-api-access-hj99z\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " 
pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.091136 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-scripts\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.091176 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-var-log-ovn\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.091192 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-var-run\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.091210 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfdtb\" (UniqueName: \"kubernetes.io/projected/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-kube-api-access-bfdtb\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.091226 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-scripts\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.091668 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-var-log-ovn\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.091808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-var-run\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.092168 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-var-run-ovn\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.092211 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-etc-ovs\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.092232 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-var-run\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.092250 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-combined-ca-bundle\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.092270 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-var-log\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.092302 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-var-lib\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.092558 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-var-lib\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.092665 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-var-run-ovn\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.092777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-etc-ovs\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.092819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-var-run\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.093406 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-var-log\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.093922 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-scripts\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.097789 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-scripts\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.102861 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-ovn-controller-tls-certs\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.104061 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-combined-ca-bundle\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.106143 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj99z\" (UniqueName: \"kubernetes.io/projected/1fc3dff9-1209-4d8b-8927-96f5ffac33f6-kube-api-access-hj99z\") pod \"ovn-controller-cpgf5\" (UID: \"1fc3dff9-1209-4d8b-8927-96f5ffac33f6\") " pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.108348 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfdtb\" (UniqueName: \"kubernetes.io/projected/ff3989a8-bd47-4d94-bf91-47e1dd5f61d8-kube-api-access-bfdtb\") pod \"ovn-controller-ovs-dlg8k\" (UID: \"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8\") " pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.224312 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cpgf5" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.240855 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.760113 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.761557 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.769402 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.810745 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.811032 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.811338 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-vjtql" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.811601 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.811845 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.905448 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.905736 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmfzd\" (UniqueName: \"kubernetes.io/projected/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-kube-api-access-tmfzd\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.905767 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.905788 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.905837 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.906123 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.906175 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:47 crc kubenswrapper[4805]: I0217 00:41:47.906206 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-config\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.008056 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.008148 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmfzd\" (UniqueName: \"kubernetes.io/projected/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-kube-api-access-tmfzd\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.008168 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.008464 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.008184 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.008988 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.009084 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.009150 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc 
kubenswrapper[4805]: I0217 00:41:48.009164 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.009240 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-config\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.010142 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-config\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.010865 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.013870 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.019007 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.026940 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.029234 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmfzd\" (UniqueName: \"kubernetes.io/projected/0176eefc-4b9d-4e1f-913e-495ceb0c7c78-kube-api-access-tmfzd\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.042903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0176eefc-4b9d-4e1f-913e-495ceb0c7c78\") " pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:48 crc kubenswrapper[4805]: I0217 00:41:48.111170 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.156205 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.161857 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.167629 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.174652 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.175001 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.175053 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zs9kj" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.199343 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.250305 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.250486 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.250538 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.250570 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgv9\" (UniqueName: \"kubernetes.io/projected/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-kube-api-access-9zgv9\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.250712 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.250842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-config\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 
00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.250966 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.251023 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.352353 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-config\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.352451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.352489 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.352536 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.352589 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.352632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.352663 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgv9\" (UniqueName: \"kubernetes.io/projected/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-kube-api-access-9zgv9\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.352708 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.353433 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-config\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.353764 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.354447 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.355203 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.359941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.360759 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.361966 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.377963 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgv9\" (UniqueName: \"kubernetes.io/projected/e51af0b4-1c0c-4763-81f7-bf6ca2776b80-kube-api-access-9zgv9\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.391919 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e51af0b4-1c0c-4763-81f7-bf6ca2776b80\") " pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:50 crc kubenswrapper[4805]: I0217 00:41:50.499011 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 00:41:52 crc kubenswrapper[4805]: I0217 00:41:52.024236 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 00:41:52 crc kubenswrapper[4805]: E0217 00:41:52.927360 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 00:41:52 crc kubenswrapper[4805]: E0217 00:41:52.928037 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rm4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-r8wm9_openstack(e5445c48-ba5d-4416-a178-569174ed8792): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 00:41:52 crc kubenswrapper[4805]: E0217 00:41:52.929725 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" podUID="e5445c48-ba5d-4416-a178-569174ed8792" Feb 17 00:41:52 crc kubenswrapper[4805]: E0217 00:41:52.937299 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 00:41:52 crc 
kubenswrapper[4805]: E0217 00:41:52.937574 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dskh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-5wlm8_openstack(1241f903-66c5-4749-8fb5-f20e9b7cbd2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 00:41:52 crc kubenswrapper[4805]: E0217 00:41:52.938882 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" podUID="1241f903-66c5-4749-8fb5-f20e9b7cbd2c" Feb 17 00:41:53 crc kubenswrapper[4805]: E0217 00:41:53.104704 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 00:41:53 crc kubenswrapper[4805]: E0217 00:41:53.104834 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s55l5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-drvkf_openstack(4bd94f0d-589d-4f9c-83a8-b18e848d171b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 00:41:53 crc kubenswrapper[4805]: E0217 00:41:53.106208 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" podUID="4bd94f0d-589d-4f9c-83a8-b18e848d171b" Feb 17 00:41:53 crc kubenswrapper[4805]: I0217 00:41:53.413740 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 00:41:53 crc kubenswrapper[4805]: I0217 00:41:53.536784 4805 generic.go:334] "Generic (PLEG): container finished" podID="3996a68a-13de-4796-bb04-670cb7288b6d" containerID="718cd9783415dfbecdf0cac1836faa4aa2b9871ec1714e89f3895a71053842d5" exitCode=0 Feb 17 00:41:53 crc kubenswrapper[4805]: I0217 00:41:53.537029 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" event={"ID":"3996a68a-13de-4796-bb04-670cb7288b6d","Type":"ContainerDied","Data":"718cd9783415dfbecdf0cac1836faa4aa2b9871ec1714e89f3895a71053842d5"} Feb 17 00:41:53 crc kubenswrapper[4805]: I0217 00:41:53.542664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f85b021d-db5c-4716-b94f-2198c439c614","Type":"ContainerStarted","Data":"bf247cda9f1bc26d29b1126d4d2e5f0e9bf714718d898a94c66b3bae99b0c346"} Feb 17 00:41:53 crc kubenswrapper[4805]: I0217 00:41:53.544065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"2cc2653c-ccd4-46b3-993c-2447efa79c98","Type":"ContainerStarted","Data":"cf8ca94a6da0414147d0947920ae6c94a46da82f7a3974ff09f3122afee5194a"} Feb 17 00:41:53 crc kubenswrapper[4805]: I0217 00:41:53.655057 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.257972 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx"] Feb 17 00:41:54 crc kubenswrapper[4805]: W0217 00:41:54.263503 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c79f087_7a87_405e_8a91_8450f22de65d.slice/crio-cd029041cb1c03b2678f6f6a1a8b65e377894148ff84abf8d8d5308f15453286 WatchSource:0}: Error finding container cd029041cb1c03b2678f6f6a1a8b65e377894148ff84abf8d8d5308f15453286: Status 404 returned error can't find the container with id cd029041cb1c03b2678f6f6a1a8b65e377894148ff84abf8d8d5308f15453286 Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.266882 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 00:41:54 crc kubenswrapper[4805]: W0217 00:41:54.268407 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae33ba11_f42a_4134_be89_fbe93e76f0ae.slice/crio-1faa3f0a5131d452224702f94352c686de14d30e9eab3d2098a4daec57e0f313 WatchSource:0}: Error finding container 1faa3f0a5131d452224702f94352c686de14d30e9eab3d2098a4daec57e0f313: Status 404 returned error can't find the container with id 1faa3f0a5131d452224702f94352c686de14d30e9eab3d2098a4daec57e0f313 Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.326018 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.334531 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.422371 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 00:41:54 crc kubenswrapper[4805]: W0217 00:41:54.428870 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc55b214_5b43_49cd_aadb_967188b34da1.slice/crio-13b813c0bfb537ed9e25aa6071f6f1c024f26c0936d8369ff8012c3cd7befba6 WatchSource:0}: Error finding container 13b813c0bfb537ed9e25aa6071f6f1c024f26c0936d8369ff8012c3cd7befba6: Status 404 returned error can't find the container with id 13b813c0bfb537ed9e25aa6071f6f1c024f26c0936d8369ff8012c3cd7befba6 Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.429840 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.448290 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-config\") pod \"e5445c48-ba5d-4416-a178-569174ed8792\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.448494 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dskh\" (UniqueName: \"kubernetes.io/projected/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-kube-api-access-2dskh\") pod \"1241f903-66c5-4749-8fb5-f20e9b7cbd2c\" (UID: \"1241f903-66c5-4749-8fb5-f20e9b7cbd2c\") " Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.448532 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-config\") pod \"1241f903-66c5-4749-8fb5-f20e9b7cbd2c\" (UID: \"1241f903-66c5-4749-8fb5-f20e9b7cbd2c\") " Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.448567 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-dns-svc\") pod \"e5445c48-ba5d-4416-a178-569174ed8792\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.448598 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rm4x\" (UniqueName: \"kubernetes.io/projected/e5445c48-ba5d-4416-a178-569174ed8792-kube-api-access-5rm4x\") pod \"e5445c48-ba5d-4416-a178-569174ed8792\" (UID: \"e5445c48-ba5d-4416-a178-569174ed8792\") " Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.454458 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-kube-api-access-2dskh" (OuterVolumeSpecName: "kube-api-access-2dskh") pod "1241f903-66c5-4749-8fb5-f20e9b7cbd2c" (UID: "1241f903-66c5-4749-8fb5-f20e9b7cbd2c"). InnerVolumeSpecName "kube-api-access-2dskh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.455051 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5445c48-ba5d-4416-a178-569174ed8792-kube-api-access-5rm4x" (OuterVolumeSpecName: "kube-api-access-5rm4x") pod "e5445c48-ba5d-4416-a178-569174ed8792" (UID: "e5445c48-ba5d-4416-a178-569174ed8792"). InnerVolumeSpecName "kube-api-access-5rm4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.454910 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-config" (OuterVolumeSpecName: "config") pod "e5445c48-ba5d-4416-a178-569174ed8792" (UID: "e5445c48-ba5d-4416-a178-569174ed8792"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.455344 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5445c48-ba5d-4416-a178-569174ed8792" (UID: "e5445c48-ba5d-4416-a178-569174ed8792"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.458857 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-config" (OuterVolumeSpecName: "config") pod "1241f903-66c5-4749-8fb5-f20e9b7cbd2c" (UID: "1241f903-66c5-4749-8fb5-f20e9b7cbd2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.469826 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-545f546dbb-kv52h"] Feb 17 00:41:54 crc kubenswrapper[4805]: W0217 00:41:54.470640 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86596154_083f_466d_b410_e478418fc73c.slice/crio-32dbf980e011a7cbfbd1482dbd5a380efbacf07e3e6f8c09c4eff8f528175fdd WatchSource:0}: Error finding container 32dbf980e011a7cbfbd1482dbd5a380efbacf07e3e6f8c09c4eff8f528175fdd: Status 404 returned error can't find the container with id 32dbf980e011a7cbfbd1482dbd5a380efbacf07e3e6f8c09c4eff8f528175fdd Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.550520 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.550545 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dskh\" (UniqueName: \"kubernetes.io/projected/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-kube-api-access-2dskh\") on node \"crc\" DevicePath \"\"" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.550556 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1241f903-66c5-4749-8fb5-f20e9b7cbd2c-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.550564 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5445c48-ba5d-4416-a178-569174ed8792-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.550574 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rm4x\" (UniqueName: \"kubernetes.io/projected/e5445c48-ba5d-4416-a178-569174ed8792-kube-api-access-5rm4x\") on node \"crc\" DevicePath \"\"" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.555393 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"dc55b214-5b43-49cd-aadb-967188b34da1","Type":"ContainerStarted","Data":"13b813c0bfb537ed9e25aa6071f6f1c024f26c0936d8369ff8012c3cd7befba6"} Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.559494 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" event={"ID":"3996a68a-13de-4796-bb04-670cb7288b6d","Type":"ContainerStarted","Data":"71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165"} Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.564154 4805 generic.go:334] "Generic (PLEG): container finished" podID="4bd94f0d-589d-4f9c-83a8-b18e848d171b" containerID="3f3b617a340f0a0f70655c35a5fb56a7f7cf435cf5856130519f7d873e986460" exitCode=0 Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.564214 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" event={"ID":"4bd94f0d-589d-4f9c-83a8-b18e848d171b","Type":"ContainerDied","Data":"3f3b617a340f0a0f70655c35a5fb56a7f7cf435cf5856130519f7d873e986460"} Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.566085 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2ca81e9-e569-4f1b-afcc-be3e47407114","Type":"ContainerStarted","Data":"44cd75bca52272431f59ccf51860d68ab7b88ca516662a4b55a8c117ea2585a7"} Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.567120 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx" event={"ID":"ae33ba11-f42a-4134-be89-fbe93e76f0ae","Type":"ContainerStarted","Data":"1faa3f0a5131d452224702f94352c686de14d30e9eab3d2098a4daec57e0f313"} Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.581596 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" podStartSLOduration=3.09584767 podStartE2EDuration="18.581574435s" podCreationTimestamp="2026-02-17 00:41:36 +0000 UTC" firstStartedPulling="2026-02-17 00:41:37.669532675 +0000 UTC m=+1123.685342073" lastFinishedPulling="2026-02-17 00:41:53.15525944 +0000 UTC m=+1139.171068838" observedRunningTime="2026-02-17 00:41:54.574521969 +0000 UTC m=+1140.590331367" watchObservedRunningTime="2026-02-17 00:41:54.581574435 +0000 UTC m=+1140.597383833" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.582304 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ccaa39fb-d7dc-4011-8b95-cd12af49adc5","Type":"ContainerStarted","Data":"fcec51d441be929c0b3c1588014168704f3963788a230982f072473a24ecc46a"} Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.586199 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1c79f087-7a87-405e-8a91-8450f22de65d","Type":"ContainerStarted","Data":"cd029041cb1c03b2678f6f6a1a8b65e377894148ff84abf8d8d5308f15453286"} Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.589972 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-545f546dbb-kv52h" event={"ID":"86596154-083f-466d-b410-e478418fc73c","Type":"ContainerStarted","Data":"32dbf980e011a7cbfbd1482dbd5a380efbacf07e3e6f8c09c4eff8f528175fdd"} Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.593580 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" event={"ID":"e5445c48-ba5d-4416-a178-569174ed8792","Type":"ContainerDied","Data":"30b596ef2c01f91190322440264d73e808b0a3c7336ca75671bc97ca590eac8a"} Feb 17 
00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.593702 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-r8wm9" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.603465 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" event={"ID":"1241f903-66c5-4749-8fb5-f20e9b7cbd2c","Type":"ContainerDied","Data":"8c99c3eccf25b4d39bbdc76ed50cd4d79b093d0676f14cfc9e0e9f43f4de8573"} Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.603518 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-5wlm8" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.666395 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r8wm9"] Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.670391 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-r8wm9"] Feb 17 00:41:54 crc kubenswrapper[4805]: W0217 00:41:54.705416 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e80aa4a_3260_4111_a066_112ffac85ae7.slice/crio-6f3c35c883a9690f8b216b3c591981d5f8490dd76706bc35d9f97dfce652695a WatchSource:0}: Error finding container 6f3c35c883a9690f8b216b3c591981d5f8490dd76706bc35d9f97dfce652695a: Status 404 returned error can't find the container with id 6f3c35c883a9690f8b216b3c591981d5f8490dd76706bc35d9f97dfce652695a Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.707643 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.714179 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5wlm8"] Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.721896 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-5wlm8"] Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.730246 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cpgf5"] Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.773448 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dlg8k"] Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.816202 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1241f903-66c5-4749-8fb5-f20e9b7cbd2c" path="/var/lib/kubelet/pods/1241f903-66c5-4749-8fb5-f20e9b7cbd2c/volumes" Feb 17 00:41:54 crc kubenswrapper[4805]: I0217 00:41:54.816628 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5445c48-ba5d-4416-a178-569174ed8792" path="/var/lib/kubelet/pods/e5445c48-ba5d-4416-a178-569174ed8792/volumes" Feb 17 00:41:55 crc kubenswrapper[4805]: I0217 00:41:55.262362 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 00:41:55 crc kubenswrapper[4805]: I0217 00:41:55.612962 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2e80aa4a-3260-4111-a066-112ffac85ae7","Type":"ContainerStarted","Data":"6f3c35c883a9690f8b216b3c591981d5f8490dd76706bc35d9f97dfce652695a"} Feb 17 00:41:55 crc kubenswrapper[4805]: I0217 00:41:55.615263 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cpgf5" 
event={"ID":"1fc3dff9-1209-4d8b-8927-96f5ffac33f6","Type":"ContainerStarted","Data":"ebd62517130dc08a765eafdbffa437b4ae9fa2d36ed829b0c91cfeb3b9522c4f"} Feb 17 00:41:55 crc kubenswrapper[4805]: I0217 00:41:55.616910 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-545f546dbb-kv52h" event={"ID":"86596154-083f-466d-b410-e478418fc73c","Type":"ContainerStarted","Data":"afdecf514f45a2335cf7d70ed270b78d77e3774fd6f461d3dd792826a88cea3b"} Feb 17 00:41:55 crc kubenswrapper[4805]: I0217 00:41:55.622059 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" event={"ID":"4bd94f0d-589d-4f9c-83a8-b18e848d171b","Type":"ContainerStarted","Data":"03a81074670b871b67c288bbb3d27f08695f4b80627a346e3405c5d2afdf5087"} Feb 17 00:41:55 crc kubenswrapper[4805]: I0217 00:41:55.622231 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:41:55 crc kubenswrapper[4805]: I0217 00:41:55.624816 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dlg8k" event={"ID":"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8","Type":"ContainerStarted","Data":"d01565dd46e9f8b8a107d0836f362fd5463cac2077fb733a6a16af2febe17461"} Feb 17 00:41:55 crc kubenswrapper[4805]: I0217 00:41:55.624969 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:41:55 crc kubenswrapper[4805]: I0217 00:41:55.638850 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-545f546dbb-kv52h" podStartSLOduration=11.638831145 podStartE2EDuration="11.638831145s" podCreationTimestamp="2026-02-17 00:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:41:55.630240697 +0000 UTC m=+1141.646050115" watchObservedRunningTime="2026-02-17 00:41:55.638831145 +0000 UTC m=+1141.654640543" Feb 17 00:41:55 crc kubenswrapper[4805]: I0217 00:41:55.650125 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" podStartSLOduration=-9223372017.20468 podStartE2EDuration="19.650095968s" podCreationTimestamp="2026-02-17 00:41:36 +0000 UTC" firstStartedPulling="2026-02-17 00:41:37.449187668 +0000 UTC m=+1123.464997066" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:41:55.648051721 +0000 UTC m=+1141.663861119" watchObservedRunningTime="2026-02-17 00:41:55.650095968 +0000 UTC m=+1141.665905366" Feb 17 00:41:55 crc kubenswrapper[4805]: I0217 00:41:55.720526 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 00:41:55 crc kubenswrapper[4805]: W0217 00:41:55.754972 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode51af0b4_1c0c_4763_81f7_bf6ca2776b80.slice/crio-ec5d0bbba93281da04f04b452eb057b21cf9aaa49eaef258d44fefb7ac353fb2 WatchSource:0}: Error finding container ec5d0bbba93281da04f04b452eb057b21cf9aaa49eaef258d44fefb7ac353fb2: Status 404 returned error can't find the container with id ec5d0bbba93281da04f04b452eb057b21cf9aaa49eaef258d44fefb7ac353fb2 Feb 17 00:41:56 crc kubenswrapper[4805]: I0217 00:41:56.753161 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"e51af0b4-1c0c-4763-81f7-bf6ca2776b80","Type":"ContainerStarted","Data":"ec5d0bbba93281da04f04b452eb057b21cf9aaa49eaef258d44fefb7ac353fb2"} Feb 17 00:41:56 crc kubenswrapper[4805]: I0217 00:41:56.758270 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0176eefc-4b9d-4e1f-913e-495ceb0c7c78","Type":"ContainerStarted","Data":"c83b38d673cb46540f0bf7bce6ab42eeccf78af5b9d177acabfdbb7a80e091c4"} Feb 17 00:42:01 crc kubenswrapper[4805]: I0217 00:42:01.938547 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:42:02 crc kubenswrapper[4805]: I0217 00:42:02.244508 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:42:02 crc kubenswrapper[4805]: I0217 00:42:02.303655 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drvkf"] Feb 17 00:42:02 crc kubenswrapper[4805]: I0217 00:42:02.813064 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" podUID="4bd94f0d-589d-4f9c-83a8-b18e848d171b" containerName="dnsmasq-dns" containerID="cri-o://03a81074670b871b67c288bbb3d27f08695f4b80627a346e3405c5d2afdf5087" gracePeriod=10 Feb 17 00:42:04 crc kubenswrapper[4805]: I0217 00:42:04.851098 4805 generic.go:334] "Generic (PLEG): container finished" podID="4bd94f0d-589d-4f9c-83a8-b18e848d171b" containerID="03a81074670b871b67c288bbb3d27f08695f4b80627a346e3405c5d2afdf5087" exitCode=0 Feb 17 00:42:04 crc kubenswrapper[4805]: I0217 00:42:04.851144 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" event={"ID":"4bd94f0d-589d-4f9c-83a8-b18e848d171b","Type":"ContainerDied","Data":"03a81074670b871b67c288bbb3d27f08695f4b80627a346e3405c5d2afdf5087"} Feb 17 00:42:04 crc kubenswrapper[4805]: I0217 00:42:04.945385 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:42:04 crc kubenswrapper[4805]: I0217 00:42:04.945647 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:42:04 crc kubenswrapper[4805]: I0217 00:42:04.950446 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:42:05 crc kubenswrapper[4805]: I0217 00:42:05.864742 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-545f546dbb-kv52h" Feb 17 00:42:05 crc kubenswrapper[4805]: I0217 00:42:05.935449 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-68cc555589-d9q87"] Feb 17 00:42:06 crc kubenswrapper[4805]: I0217 00:42:06.935945 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" podUID="4bd94f0d-589d-4f9c-83a8-b18e848d171b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Feb 17 00:42:07 crc kubenswrapper[4805]: I0217 00:42:07.863898 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:42:07 crc kubenswrapper[4805]: I0217 00:42:07.915160 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" event={"ID":"4bd94f0d-589d-4f9c-83a8-b18e848d171b","Type":"ContainerDied","Data":"94a6878f22248170e11ac4b905f3d8e450091dde74288fe5177518fc1a7742d6"} Feb 17 00:42:07 crc kubenswrapper[4805]: I0217 00:42:07.915220 4805 scope.go:117] "RemoveContainer" containerID="03a81074670b871b67c288bbb3d27f08695f4b80627a346e3405c5d2afdf5087" Feb 17 00:42:07 crc kubenswrapper[4805]: I0217 00:42:07.915252 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-drvkf" Feb 17 00:42:07 crc kubenswrapper[4805]: I0217 00:42:07.977742 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s55l5\" (UniqueName: \"kubernetes.io/projected/4bd94f0d-589d-4f9c-83a8-b18e848d171b-kube-api-access-s55l5\") pod \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " Feb 17 00:42:07 crc kubenswrapper[4805]: I0217 00:42:07.977913 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-dns-svc\") pod \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " Feb 17 00:42:07 crc kubenswrapper[4805]: I0217 00:42:07.978032 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-config\") pod \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\" (UID: \"4bd94f0d-589d-4f9c-83a8-b18e848d171b\") " Feb 17 00:42:07 crc kubenswrapper[4805]: I0217 00:42:07.981076 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd94f0d-589d-4f9c-83a8-b18e848d171b-kube-api-access-s55l5" (OuterVolumeSpecName: "kube-api-access-s55l5") pod "4bd94f0d-589d-4f9c-83a8-b18e848d171b" (UID: "4bd94f0d-589d-4f9c-83a8-b18e848d171b"). InnerVolumeSpecName "kube-api-access-s55l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:08 crc kubenswrapper[4805]: I0217 00:42:08.016696 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-config" (OuterVolumeSpecName: "config") pod "4bd94f0d-589d-4f9c-83a8-b18e848d171b" (UID: "4bd94f0d-589d-4f9c-83a8-b18e848d171b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:08 crc kubenswrapper[4805]: I0217 00:42:08.023919 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bd94f0d-589d-4f9c-83a8-b18e848d171b" (UID: "4bd94f0d-589d-4f9c-83a8-b18e848d171b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:08 crc kubenswrapper[4805]: I0217 00:42:08.080625 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s55l5\" (UniqueName: \"kubernetes.io/projected/4bd94f0d-589d-4f9c-83a8-b18e848d171b-kube-api-access-s55l5\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:08 crc kubenswrapper[4805]: I0217 00:42:08.080663 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:08 crc kubenswrapper[4805]: I0217 00:42:08.080678 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bd94f0d-589d-4f9c-83a8-b18e848d171b-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:08 crc kubenswrapper[4805]: I0217 00:42:08.260498 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drvkf"] Feb 17 00:42:08 crc kubenswrapper[4805]: I0217 00:42:08.273307 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-drvkf"] Feb 17 00:42:08 crc kubenswrapper[4805]: I0217 00:42:08.761898 4805 scope.go:117] "RemoveContainer" containerID="3f3b617a340f0a0f70655c35a5fb56a7f7cf435cf5856130519f7d873e986460" Feb 17 00:42:08 crc kubenswrapper[4805]: I0217 00:42:08.794768 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bd94f0d-589d-4f9c-83a8-b18e848d171b" path="/var/lib/kubelet/pods/4bd94f0d-589d-4f9c-83a8-b18e848d171b/volumes" Feb 17 00:42:09 crc kubenswrapper[4805]: E0217 00:42:09.196868 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 17 00:42:09 crc kubenswrapper[4805]: E0217 00:42:09.196936 4805 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 17 00:42:09 crc kubenswrapper[4805]: E0217 00:42:09.197280 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4c67k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(1c79f087-7a87-405e-8a91-8450f22de65d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 00:42:09 crc kubenswrapper[4805]: E0217 00:42:09.198673 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="1c79f087-7a87-405e-8a91-8450f22de65d" Feb 17 00:42:09 crc kubenswrapper[4805]: E0217 00:42:09.942135 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="1c79f087-7a87-405e-8a91-8450f22de65d" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.055020 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-gp5nc"] Feb 17 00:42:10 crc kubenswrapper[4805]: E0217 00:42:10.055464 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd94f0d-589d-4f9c-83a8-b18e848d171b" containerName="dnsmasq-dns" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.055484 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd94f0d-589d-4f9c-83a8-b18e848d171b" containerName="dnsmasq-dns" Feb 17 00:42:10 crc kubenswrapper[4805]: E0217 00:42:10.055534 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd94f0d-589d-4f9c-83a8-b18e848d171b" containerName="init" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.055543 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd94f0d-589d-4f9c-83a8-b18e848d171b" containerName="init" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.055760 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd94f0d-589d-4f9c-83a8-b18e848d171b" containerName="dnsmasq-dns" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.056494 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.064039 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.083361 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gp5nc"] Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.117212 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdswv\" (UniqueName: \"kubernetes.io/projected/c76aae77-30fe-4644-96a9-4c4d2978e3d2-kube-api-access-kdswv\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.117287 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c76aae77-30fe-4644-96a9-4c4d2978e3d2-combined-ca-bundle\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.117356 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c76aae77-30fe-4644-96a9-4c4d2978e3d2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.117447 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c76aae77-30fe-4644-96a9-4c4d2978e3d2-ovn-rundir\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.117519 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c76aae77-30fe-4644-96a9-4c4d2978e3d2-ovs-rundir\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.117572 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76aae77-30fe-4644-96a9-4c4d2978e3d2-config\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.214750 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rq4k4"] Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.216171 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.220722 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdswv\" (UniqueName: \"kubernetes.io/projected/c76aae77-30fe-4644-96a9-4c4d2978e3d2-kube-api-access-kdswv\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.220790 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c76aae77-30fe-4644-96a9-4c4d2978e3d2-combined-ca-bundle\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.220841 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c76aae77-30fe-4644-96a9-4c4d2978e3d2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.220886 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c76aae77-30fe-4644-96a9-4c4d2978e3d2-ovn-rundir\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.220938 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c76aae77-30fe-4644-96a9-4c4d2978e3d2-ovs-rundir\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.220988 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76aae77-30fe-4644-96a9-4c4d2978e3d2-config\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.222614 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c76aae77-30fe-4644-96a9-4c4d2978e3d2-ovn-rundir\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.222685 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c76aae77-30fe-4644-96a9-4c4d2978e3d2-ovs-rundir\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.223653 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.225339 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76aae77-30fe-4644-96a9-4c4d2978e3d2-config\") pod 
\"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.239388 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rq4k4"] Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.262999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdswv\" (UniqueName: \"kubernetes.io/projected/c76aae77-30fe-4644-96a9-4c4d2978e3d2-kube-api-access-kdswv\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.263231 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c76aae77-30fe-4644-96a9-4c4d2978e3d2-combined-ca-bundle\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.263565 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c76aae77-30fe-4644-96a9-4c4d2978e3d2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gp5nc\" (UID: \"c76aae77-30fe-4644-96a9-4c4d2978e3d2\") " pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.323070 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-config\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.323485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.323523 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.323560 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blgfh\" (UniqueName: \"kubernetes.io/projected/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-kube-api-access-blgfh\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.416663 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rq4k4"] Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.425262 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-config\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: 
\"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.425497 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.425548 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.425599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blgfh\" (UniqueName: \"kubernetes.io/projected/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-kube-api-access-blgfh\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.426301 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-config\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.426381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.426632 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.441148 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xl4vd"] Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.442930 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.447271 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.454119 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xl4vd"] Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.514709 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blgfh\" (UniqueName: \"kubernetes.io/projected/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-kube-api-access-blgfh\") pod \"dnsmasq-dns-7fd796d7df-rq4k4\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.549509 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfmz\" (UniqueName: \"kubernetes.io/projected/94d37dc7-7d79-4fcf-8971-743ef480eedd-kube-api-access-5hfmz\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.549598 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-config\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.549644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.549668 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.549732 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.650940 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.650992 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.651061 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.651142 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hfmz\" (UniqueName: \"kubernetes.io/projected/94d37dc7-7d79-4fcf-8971-743ef480eedd-kube-api-access-5hfmz\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.651196 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-config\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.652217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.652269 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-config\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.652783 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.653017 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.686514 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hfmz\" (UniqueName: \"kubernetes.io/projected/94d37dc7-7d79-4fcf-8971-743ef480eedd-kube-api-access-5hfmz\") pod \"dnsmasq-dns-86db49b7ff-xl4vd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.949363 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2cc2653c-ccd4-46b3-993c-2447efa79c98","Type":"ContainerStarted","Data":"5ce489f1a5c17e8f42d5825136a8a8dbfdd0e53a98e83b60d3c6e1408cf021c0"} Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.951377 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-dlg8k" event={"ID":"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8","Type":"ContainerStarted","Data":"a74f64e70201edd13520a012bf479f86e473247029cd16cc8c7b0a471a6ca860"} Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.952604 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ccaa39fb-d7dc-4011-8b95-cd12af49adc5","Type":"ContainerStarted","Data":"a79bbc3625882950470b5458a3d21a3ac46d7c30154c02d3910f833ea5fb7578"} Feb 17 00:42:10 crc kubenswrapper[4805]: I0217 00:42:10.953041 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.031780 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.026867059 podStartE2EDuration="30.031761964s" podCreationTimestamp="2026-02-17 00:41:41 +0000 UTC" firstStartedPulling="2026-02-17 00:41:53.705312939 +0000 UTC m=+1139.721122337" lastFinishedPulling="2026-02-17 00:42:07.710207844 +0000 UTC m=+1153.726017242" observedRunningTime="2026-02-17 00:42:11.025813459 +0000 UTC m=+1157.041622857" watchObservedRunningTime="2026-02-17 00:42:11.031761964 +0000 UTC m=+1157.047571362" Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.225369 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gp5nc" Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.237153 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.248146 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.750008 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gp5nc"] Feb 17 00:42:11 crc kubenswrapper[4805]: W0217 00:42:11.755722 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc76aae77_30fe_4644_96a9_4c4d2978e3d2.slice/crio-823827949fafdfca39d2615d68f889833feacfbac2401b9b1523f6c500742039 WatchSource:0}: Error finding container 823827949fafdfca39d2615d68f889833feacfbac2401b9b1523f6c500742039: Status 404 returned error can't find the container with id 823827949fafdfca39d2615d68f889833feacfbac2401b9b1523f6c500742039 Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.811141 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xl4vd"] Feb 17 00:42:11 crc kubenswrapper[4805]: W0217 00:42:11.813423 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94d37dc7_7d79_4fcf_8971_743ef480eedd.slice/crio-0c47682855bddf1977fad33659828f077fb63966693b84fb3bd4e0bb2d2d0f89 WatchSource:0}: Error finding container 0c47682855bddf1977fad33659828f077fb63966693b84fb3bd4e0bb2d2d0f89: Status 404 returned error can't find the container with id 0c47682855bddf1977fad33659828f077fb63966693b84fb3bd4e0bb2d2d0f89 Feb 17 00:42:11 crc kubenswrapper[4805]: W0217 00:42:11.823995 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2af3e2a3_582f_4ae9_a637_a2ec3667c9f9.slice/crio-6a764426004cfd4999bb250770de5da667576acbb3d4af079361abbae4b82c4c WatchSource:0}: 
Error finding container 6a764426004cfd4999bb250770de5da667576acbb3d4af079361abbae4b82c4c: Status 404 returned error can't find the container with id 6a764426004cfd4999bb250770de5da667576acbb3d4af079361abbae4b82c4c Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.824280 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rq4k4"] Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.960476 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" event={"ID":"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9","Type":"ContainerStarted","Data":"6a764426004cfd4999bb250770de5da667576acbb3d4af079361abbae4b82c4c"} Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.962302 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2ca81e9-e569-4f1b-afcc-be3e47407114","Type":"ContainerStarted","Data":"d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c"} Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.964214 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gp5nc" event={"ID":"c76aae77-30fe-4644-96a9-4c4d2978e3d2","Type":"ContainerStarted","Data":"823827949fafdfca39d2615d68f889833feacfbac2401b9b1523f6c500742039"} Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.978416 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" event={"ID":"94d37dc7-7d79-4fcf-8971-743ef480eedd","Type":"ContainerStarted","Data":"0c47682855bddf1977fad33659828f077fb63966693b84fb3bd4e0bb2d2d0f89"} Feb 17 00:42:11 crc kubenswrapper[4805]: I0217 00:42:11.979951 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc55b214-5b43-49cd-aadb-967188b34da1","Type":"ContainerStarted","Data":"937219e051ca008592afb84a19bc551c316843281575cc9779fe5a8e5ffe5bd5"} Feb 17 00:42:12 crc kubenswrapper[4805]: I0217 00:42:12.990927 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2e80aa4a-3260-4111-a066-112ffac85ae7","Type":"ContainerStarted","Data":"b2ec6aba0a414f7c3f330c820f068c7222c2a2073eb826738cfea615cea07ffd"} Feb 17 00:42:12 crc kubenswrapper[4805]: I0217 00:42:12.994458 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cpgf5" event={"ID":"1fc3dff9-1209-4d8b-8927-96f5ffac33f6","Type":"ContainerStarted","Data":"76d370b9d2766b69ca3dfcd48b2ac1639e9082896ecfc96f99257c29f158df43"} Feb 17 00:42:12 crc kubenswrapper[4805]: I0217 00:42:12.994762 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-cpgf5" Feb 17 00:42:12 crc kubenswrapper[4805]: I0217 00:42:12.996537 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0176eefc-4b9d-4e1f-913e-495ceb0c7c78","Type":"ContainerStarted","Data":"a68d28f77146e52820907a1479e9353355e10da039fb5b6b77bf297c039479e0"} Feb 17 00:42:12 crc kubenswrapper[4805]: I0217 00:42:12.998180 4805 generic.go:334] "Generic (PLEG): container finished" podID="2af3e2a3-582f-4ae9-a637-a2ec3667c9f9" containerID="7d3a0269dd3749e7a3d46e0885d6f899179663e6f4bae4e261c8403dd1501548" exitCode=0 Feb 17 00:42:12 crc kubenswrapper[4805]: I0217 00:42:12.998242 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" 
event={"ID":"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9","Type":"ContainerDied","Data":"7d3a0269dd3749e7a3d46e0885d6f899179663e6f4bae4e261c8403dd1501548"} Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.000288 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f85b021d-db5c-4716-b94f-2198c439c614","Type":"ContainerStarted","Data":"bb502dc8418948fb62a847fb824f014bbef28c073d83be4d45970a185986f097"} Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.002611 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e51af0b4-1c0c-4763-81f7-bf6ca2776b80","Type":"ContainerStarted","Data":"e4f06dbbbc41c5fbc5bd052e0466044e240f77a55cc4d0b8df919aab52ec3ce8"} Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.004090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx" event={"ID":"ae33ba11-f42a-4134-be89-fbe93e76f0ae","Type":"ContainerStarted","Data":"57b06ba4f6fac7b7b2b8ec29b237a35e19ebaa29a988919f7baa6889cf6e60af"} Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.005623 4805 generic.go:334] "Generic (PLEG): container finished" podID="94d37dc7-7d79-4fcf-8971-743ef480eedd" containerID="949eb0cabf1023f0c03621ef79beedee7e0c985ddb934e53aa2edad049af5d21" exitCode=0 Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.005697 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" event={"ID":"94d37dc7-7d79-4fcf-8971-743ef480eedd","Type":"ContainerDied","Data":"949eb0cabf1023f0c03621ef79beedee7e0c985ddb934e53aa2edad049af5d21"} Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.075417 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-lhfgx" podStartSLOduration=14.152480344 podStartE2EDuration="29.075399245s" podCreationTimestamp="2026-02-17 00:41:44 +0000 UTC" firstStartedPulling="2026-02-17 00:41:54.271472366 +0000 UTC m=+1140.287281764" lastFinishedPulling="2026-02-17 00:42:09.194391247 +0000 UTC m=+1155.210200665" observedRunningTime="2026-02-17 00:42:13.073307377 +0000 UTC m=+1159.089116775" watchObservedRunningTime="2026-02-17 00:42:13.075399245 +0000 UTC m=+1159.091208643" Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.213545 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cpgf5" podStartSLOduration=13.202151774 podStartE2EDuration="27.21352264s" podCreationTimestamp="2026-02-17 00:41:46 +0000 UTC" firstStartedPulling="2026-02-17 00:41:54.739230051 +0000 UTC m=+1140.755039459" lastFinishedPulling="2026-02-17 00:42:08.750600887 +0000 UTC m=+1154.766410325" observedRunningTime="2026-02-17 00:42:13.205886428 +0000 UTC m=+1159.221695826" watchObservedRunningTime="2026-02-17 00:42:13.21352264 +0000 UTC m=+1159.229332058" Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.546471 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.601936 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-ovsdbserver-nb\") pod \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.601993 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-config\") pod \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.602177 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blgfh\" (UniqueName: \"kubernetes.io/projected/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-kube-api-access-blgfh\") pod \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.602256 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-dns-svc\") pod \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\" (UID: \"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9\") " Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.608664 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-kube-api-access-blgfh" (OuterVolumeSpecName: "kube-api-access-blgfh") pod "2af3e2a3-582f-4ae9-a637-a2ec3667c9f9" (UID: "2af3e2a3-582f-4ae9-a637-a2ec3667c9f9"). InnerVolumeSpecName "kube-api-access-blgfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.624302 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-config" (OuterVolumeSpecName: "config") pod "2af3e2a3-582f-4ae9-a637-a2ec3667c9f9" (UID: "2af3e2a3-582f-4ae9-a637-a2ec3667c9f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.626520 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2af3e2a3-582f-4ae9-a637-a2ec3667c9f9" (UID: "2af3e2a3-582f-4ae9-a637-a2ec3667c9f9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.634480 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2af3e2a3-582f-4ae9-a637-a2ec3667c9f9" (UID: "2af3e2a3-582f-4ae9-a637-a2ec3667c9f9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.704817 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blgfh\" (UniqueName: \"kubernetes.io/projected/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-kube-api-access-blgfh\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.704854 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.704867 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:13 crc kubenswrapper[4805]: I0217 00:42:13.704877 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:14 crc kubenswrapper[4805]: I0217 00:42:14.014921 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" event={"ID":"94d37dc7-7d79-4fcf-8971-743ef480eedd","Type":"ContainerStarted","Data":"cbbdbb93028a3695caef28dec2d7dd4bf33469bd8219924081b3a82e467ebf39"} Feb 17 00:42:14 crc kubenswrapper[4805]: I0217 00:42:14.016202 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:14 crc kubenswrapper[4805]: I0217 00:42:14.017689 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" event={"ID":"2af3e2a3-582f-4ae9-a637-a2ec3667c9f9","Type":"ContainerDied","Data":"6a764426004cfd4999bb250770de5da667576acbb3d4af079361abbae4b82c4c"} Feb 17 00:42:14 crc kubenswrapper[4805]: I0217 00:42:14.017733 4805 scope.go:117] "RemoveContainer" containerID="7d3a0269dd3749e7a3d46e0885d6f899179663e6f4bae4e261c8403dd1501548" Feb 17 00:42:14 crc kubenswrapper[4805]: I0217 00:42:14.017890 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-rq4k4" Feb 17 00:42:14 crc kubenswrapper[4805]: I0217 00:42:14.021446 4805 generic.go:334] "Generic (PLEG): container finished" podID="ff3989a8-bd47-4d94-bf91-47e1dd5f61d8" containerID="a74f64e70201edd13520a012bf479f86e473247029cd16cc8c7b0a471a6ca860" exitCode=0 Feb 17 00:42:14 crc kubenswrapper[4805]: I0217 00:42:14.021554 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dlg8k" event={"ID":"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8","Type":"ContainerDied","Data":"a74f64e70201edd13520a012bf479f86e473247029cd16cc8c7b0a471a6ca860"} Feb 17 00:42:14 crc kubenswrapper[4805]: I0217 00:42:14.048751 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" podStartSLOduration=4.048732496 podStartE2EDuration="4.048732496s" podCreationTimestamp="2026-02-17 00:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:42:14.040031895 +0000 UTC m=+1160.055841293" watchObservedRunningTime="2026-02-17 00:42:14.048732496 +0000 UTC m=+1160.064541894" Feb 17 00:42:14 crc kubenswrapper[4805]: I0217 00:42:14.103481 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rq4k4"] Feb 17 00:42:14 crc kubenswrapper[4805]: I0217 00:42:14.107080 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-rq4k4"] Feb 17 00:42:14 crc kubenswrapper[4805]: I0217 00:42:14.812038 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2af3e2a3-582f-4ae9-a637-a2ec3667c9f9" path="/var/lib/kubelet/pods/2af3e2a3-582f-4ae9-a637-a2ec3667c9f9/volumes" Feb 17 00:42:15 crc kubenswrapper[4805]: I0217 00:42:15.036398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dlg8k" event={"ID":"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8","Type":"ContainerStarted","Data":"eb1707d6dabb8183a1609f4411664efd66bb7b4f031bb35a12ccc746b821ff30"} Feb 17 00:42:16 crc kubenswrapper[4805]: I0217 00:42:16.048591 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0176eefc-4b9d-4e1f-913e-495ceb0c7c78","Type":"ContainerStarted","Data":"0fc442944bfbd01004d5c09f27418013ca1a9b5aeb67836d2c15ee5d5c6f16c8"} Feb 17 00:42:16 crc kubenswrapper[4805]: I0217 00:42:16.050197 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dlg8k" event={"ID":"ff3989a8-bd47-4d94-bf91-47e1dd5f61d8","Type":"ContainerStarted","Data":"e042267edffee69a376cea6d18296e982d813f9acba3f40a2a5e8e5636722096"} Feb 17 00:42:16 crc kubenswrapper[4805]: I0217 00:42:16.050828 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:42:16 crc kubenswrapper[4805]: I0217 00:42:16.050867 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:42:16 crc kubenswrapper[4805]: I0217 00:42:16.052503 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e51af0b4-1c0c-4763-81f7-bf6ca2776b80","Type":"ContainerStarted","Data":"2602fb46614c4744059ce7ad966d6d57a8f18fc02f687dc4bc6728ec3c7be916"} Feb 17 00:42:16 crc kubenswrapper[4805]: I0217 00:42:16.053680 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gp5nc" 
event={"ID":"c76aae77-30fe-4644-96a9-4c4d2978e3d2","Type":"ContainerStarted","Data":"b8d887116f6b3c624d93632788065408c4c70016932054fe2c491c6629f44ee2"} Feb 17 00:42:16 crc kubenswrapper[4805]: I0217 00:42:16.074553 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.690220992 podStartE2EDuration="30.074530463s" podCreationTimestamp="2026-02-17 00:41:46 +0000 UTC" firstStartedPulling="2026-02-17 00:41:55.753744295 +0000 UTC m=+1141.769553713" lastFinishedPulling="2026-02-17 00:42:15.138053786 +0000 UTC m=+1161.153863184" observedRunningTime="2026-02-17 00:42:16.069601456 +0000 UTC m=+1162.085410854" watchObservedRunningTime="2026-02-17 00:42:16.074530463 +0000 UTC m=+1162.090339851" Feb 17 00:42:16 crc kubenswrapper[4805]: I0217 00:42:16.095628 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-dlg8k" podStartSLOduration=17.000984283 podStartE2EDuration="30.095612769s" podCreationTimestamp="2026-02-17 00:41:46 +0000 UTC" firstStartedPulling="2026-02-17 00:41:54.831158313 +0000 UTC m=+1140.846967711" lastFinishedPulling="2026-02-17 00:42:07.925786799 +0000 UTC m=+1153.941596197" observedRunningTime="2026-02-17 00:42:16.090926208 +0000 UTC m=+1162.106735606" watchObservedRunningTime="2026-02-17 00:42:16.095612769 +0000 UTC m=+1162.111422167" Feb 17 00:42:16 crc kubenswrapper[4805]: I0217 00:42:16.117243 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.76577743 podStartE2EDuration="27.117219368s" podCreationTimestamp="2026-02-17 00:41:49 +0000 UTC" firstStartedPulling="2026-02-17 00:41:55.760195134 +0000 UTC m=+1141.776004532" lastFinishedPulling="2026-02-17 00:42:15.111637072 +0000 UTC m=+1161.127446470" observedRunningTime="2026-02-17 00:42:16.10934103 +0000 UTC m=+1162.125150428" watchObservedRunningTime="2026-02-17 00:42:16.117219368 +0000 UTC m=+1162.133028766" Feb 17 00:42:16 crc kubenswrapper[4805]: I0217 00:42:16.132927 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-gp5nc" podStartSLOduration=2.78955013 podStartE2EDuration="6.132909004s" podCreationTimestamp="2026-02-17 00:42:10 +0000 UTC" firstStartedPulling="2026-02-17 00:42:11.757446298 +0000 UTC m=+1157.773255706" lastFinishedPulling="2026-02-17 00:42:15.100805182 +0000 UTC m=+1161.116614580" observedRunningTime="2026-02-17 00:42:16.124746597 +0000 UTC m=+1162.140555995" watchObservedRunningTime="2026-02-17 00:42:16.132909004 +0000 UTC m=+1162.148718402" Feb 17 00:42:16 crc kubenswrapper[4805]: I0217 00:42:16.582245 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 17 00:42:17 crc kubenswrapper[4805]: I0217 00:42:17.500353 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 17 00:42:17 crc kubenswrapper[4805]: I0217 00:42:17.618903 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 17 00:42:18 crc kubenswrapper[4805]: I0217 00:42:18.075987 4805 generic.go:334] "Generic (PLEG): container finished" podID="f85b021d-db5c-4716-b94f-2198c439c614" containerID="bb502dc8418948fb62a847fb824f014bbef28c073d83be4d45970a185986f097" exitCode=0 Feb 17 00:42:18 crc kubenswrapper[4805]: I0217 00:42:18.076126 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"f85b021d-db5c-4716-b94f-2198c439c614","Type":"ContainerDied","Data":"bb502dc8418948fb62a847fb824f014bbef28c073d83be4d45970a185986f097"} Feb 17 00:42:18 crc kubenswrapper[4805]: I0217 00:42:18.077864 4805 generic.go:334] "Generic (PLEG): container finished" podID="2cc2653c-ccd4-46b3-993c-2447efa79c98" containerID="5ce489f1a5c17e8f42d5825136a8a8dbfdd0e53a98e83b60d3c6e1408cf021c0" exitCode=0 Feb 17 00:42:18 crc kubenswrapper[4805]: I0217 00:42:18.077953 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2cc2653c-ccd4-46b3-993c-2447efa79c98","Type":"ContainerDied","Data":"5ce489f1a5c17e8f42d5825136a8a8dbfdd0e53a98e83b60d3c6e1408cf021c0"} Feb 17 00:42:18 crc kubenswrapper[4805]: I0217 00:42:18.078800 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 17 00:42:18 crc kubenswrapper[4805]: I0217 00:42:18.113023 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 17 00:42:18 crc kubenswrapper[4805]: I0217 00:42:18.113669 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 17 00:42:18 crc kubenswrapper[4805]: I0217 00:42:18.121727 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 17 00:42:18 crc kubenswrapper[4805]: I0217 00:42:18.188434 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.088085 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2cc2653c-ccd4-46b3-993c-2447efa79c98","Type":"ContainerStarted","Data":"3e787b0c8b067cb06e55a458dc2bc879efe820c725e6f29348a34b5a7745a4f7"} Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.091070 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f85b021d-db5c-4716-b94f-2198c439c614","Type":"ContainerStarted","Data":"8fa0990b274bf828802899d1592d124d2a933cf5acabe6c74ef6f0b5d676710c"} Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.128282 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.373322591 podStartE2EDuration="41.128251017s" podCreationTimestamp="2026-02-17 00:41:38 +0000 UTC" firstStartedPulling="2026-02-17 00:41:52.956099811 +0000 UTC m=+1138.971909209" lastFinishedPulling="2026-02-17 00:42:07.711028217 +0000 UTC m=+1153.726837635" observedRunningTime="2026-02-17 00:42:19.118934418 +0000 UTC m=+1165.134743866" watchObservedRunningTime="2026-02-17 00:42:19.128251017 +0000 UTC m=+1165.144060455" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.159189 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.189355 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.690312739 podStartE2EDuration="40.189317782s" podCreationTimestamp="2026-02-17 00:41:39 +0000 UTC" firstStartedPulling="2026-02-17 00:41:53.426603041 +0000 UTC m=+1139.442412439" lastFinishedPulling="2026-02-17 00:42:07.925608074 +0000 UTC m=+1153.941417482" observedRunningTime="2026-02-17 00:42:19.157951971 +0000 UTC m=+1165.173761409" watchObservedRunningTime="2026-02-17 00:42:19.189317782 +0000 
UTC m=+1165.205127180" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.350592 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 17 00:42:19 crc kubenswrapper[4805]: E0217 00:42:19.350983 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af3e2a3-582f-4ae9-a637-a2ec3667c9f9" containerName="init" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.351004 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af3e2a3-582f-4ae9-a637-a2ec3667c9f9" containerName="init" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.351155 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af3e2a3-582f-4ae9-a637-a2ec3667c9f9" containerName="init" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.352221 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.354796 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.355156 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.355780 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-nzrgp" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.355962 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.368179 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.415474 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.415578 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.415680 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtzb\" (UniqueName: \"kubernetes.io/projected/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-kube-api-access-6gtzb\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.415714 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.415736 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-combined-ca-bundle\") pod 
\"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.415756 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-scripts\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.415789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-config\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.517552 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gtzb\" (UniqueName: \"kubernetes.io/projected/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-kube-api-access-6gtzb\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.517604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.517625 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.517644 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-scripts\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.517674 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-config\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.517742 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.517765 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.519179 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.519405 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-config\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.519554 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-scripts\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.524510 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.524813 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.525383 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.537081 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gtzb\" (UniqueName: \"kubernetes.io/projected/106aacfc-bb6d-46b1-b61b-35ee9f84e1d3-kube-api-access-6gtzb\") pod \"ovn-northd-0\" (UID: \"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3\") " pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.678409 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.683393 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 17 00:42:19 crc kubenswrapper[4805]: I0217 00:42:19.683445 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 17 00:42:20 crc kubenswrapper[4805]: I0217 00:42:20.112283 4805 generic.go:334] "Generic (PLEG): container finished" podID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerID="b2ec6aba0a414f7c3f330c820f068c7222c2a2073eb826738cfea615cea07ffd" exitCode=0 Feb 17 00:42:20 crc kubenswrapper[4805]: I0217 00:42:20.112630 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2e80aa4a-3260-4111-a066-112ffac85ae7","Type":"ContainerDied","Data":"b2ec6aba0a414f7c3f330c820f068c7222c2a2073eb826738cfea615cea07ffd"} Feb 17 00:42:20 crc kubenswrapper[4805]: I0217 00:42:20.141248 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 00:42:21 crc kubenswrapper[4805]: I0217 00:42:21.066764 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 17 00:42:21 crc kubenswrapper[4805]: I0217 00:42:21.067172 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 17 00:42:21 crc kubenswrapper[4805]: I0217 00:42:21.127695 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3","Type":"ContainerStarted","Data":"0f241f873df7e1c58457bdf247b8dbee724bb041c45f42281ac343472d66952d"} Feb 17 00:42:21 crc kubenswrapper[4805]: I0217 00:42:21.250930 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:21 crc kubenswrapper[4805]: I0217 00:42:21.309526 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qxrvd"] Feb 17 00:42:21 crc kubenswrapper[4805]: I0217 00:42:21.309736 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" podUID="3996a68a-13de-4796-bb04-670cb7288b6d" containerName="dnsmasq-dns" containerID="cri-o://71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165" gracePeriod=10 Feb 17 00:42:21 crc kubenswrapper[4805]: I0217 00:42:21.982651 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.067418 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k8wp\" (UniqueName: \"kubernetes.io/projected/3996a68a-13de-4796-bb04-670cb7288b6d-kube-api-access-6k8wp\") pod \"3996a68a-13de-4796-bb04-670cb7288b6d\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.067621 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-config\") pod \"3996a68a-13de-4796-bb04-670cb7288b6d\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.067742 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-dns-svc\") pod \"3996a68a-13de-4796-bb04-670cb7288b6d\" (UID: \"3996a68a-13de-4796-bb04-670cb7288b6d\") " Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.073297 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3996a68a-13de-4796-bb04-670cb7288b6d-kube-api-access-6k8wp" (OuterVolumeSpecName: "kube-api-access-6k8wp") pod "3996a68a-13de-4796-bb04-670cb7288b6d" (UID: "3996a68a-13de-4796-bb04-670cb7288b6d"). InnerVolumeSpecName "kube-api-access-6k8wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.114688 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3996a68a-13de-4796-bb04-670cb7288b6d" (UID: "3996a68a-13de-4796-bb04-670cb7288b6d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.137688 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3","Type":"ContainerStarted","Data":"8b633d158de03c875ecaf1bd911ae9cfcf1e39f37d7bca158588cf1b3c52ce70"} Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.137726 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"106aacfc-bb6d-46b1-b61b-35ee9f84e1d3","Type":"ContainerStarted","Data":"3ffd1925f09fad7c5337f6afa7843dfeb2d84e63c3f0c9f1eb2234579b88a4d7"} Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.138236 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.139511 4805 generic.go:334] "Generic (PLEG): container finished" podID="3996a68a-13de-4796-bb04-670cb7288b6d" containerID="71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165" exitCode=0 Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.139542 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" event={"ID":"3996a68a-13de-4796-bb04-670cb7288b6d","Type":"ContainerDied","Data":"71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165"} Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.139567 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" event={"ID":"3996a68a-13de-4796-bb04-670cb7288b6d","Type":"ContainerDied","Data":"55aa3d90dfb2e12a08b5b225728a204b4d75b391057a1ad29d62e0219ebc7319"} Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.139637 4805 scope.go:117] "RemoveContainer" containerID="71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.139667 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qxrvd" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.144564 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-config" (OuterVolumeSpecName: "config") pod "3996a68a-13de-4796-bb04-670cb7288b6d" (UID: "3996a68a-13de-4796-bb04-670cb7288b6d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.169779 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.169805 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k8wp\" (UniqueName: \"kubernetes.io/projected/3996a68a-13de-4796-bb04-670cb7288b6d-kube-api-access-6k8wp\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.169814 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3996a68a-13de-4796-bb04-670cb7288b6d-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.170273 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.201163033 podStartE2EDuration="3.170251295s" podCreationTimestamp="2026-02-17 00:42:19 +0000 UTC" firstStartedPulling="2026-02-17 00:42:20.154900858 +0000 UTC m=+1166.170710256" lastFinishedPulling="2026-02-17 00:42:21.12398912 +0000 UTC m=+1167.139798518" observedRunningTime="2026-02-17 00:42:22.166613634 +0000 UTC m=+1168.182423032" watchObservedRunningTime="2026-02-17 00:42:22.170251295 +0000 UTC m=+1168.186060693" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.226571 4805 scope.go:117] "RemoveContainer" containerID="718cd9783415dfbecdf0cac1836faa4aa2b9871ec1714e89f3895a71053842d5" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.254798 4805 scope.go:117] "RemoveContainer" containerID="71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165" Feb 17 00:42:22 crc kubenswrapper[4805]: E0217 00:42:22.255399 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165\": container with ID starting with 71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165 not found: ID does not exist" containerID="71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.255431 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165"} err="failed to get container status \"71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165\": rpc error: code = NotFound desc = could not find container \"71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165\": container with ID starting with 71e8687edbc8200a7e9ba75e7d9c065649518a3ce37c9fbb7930cbca837f9165 not found: ID does not exist" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.255449 4805 scope.go:117] "RemoveContainer" containerID="718cd9783415dfbecdf0cac1836faa4aa2b9871ec1714e89f3895a71053842d5" Feb 17 00:42:22 crc kubenswrapper[4805]: E0217 00:42:22.255778 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"718cd9783415dfbecdf0cac1836faa4aa2b9871ec1714e89f3895a71053842d5\": container with ID starting with 718cd9783415dfbecdf0cac1836faa4aa2b9871ec1714e89f3895a71053842d5 not found: ID does not exist" containerID="718cd9783415dfbecdf0cac1836faa4aa2b9871ec1714e89f3895a71053842d5" Feb 17 00:42:22 crc kubenswrapper[4805]: 
I0217 00:42:22.255798 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"718cd9783415dfbecdf0cac1836faa4aa2b9871ec1714e89f3895a71053842d5"} err="failed to get container status \"718cd9783415dfbecdf0cac1836faa4aa2b9871ec1714e89f3895a71053842d5\": rpc error: code = NotFound desc = could not find container \"718cd9783415dfbecdf0cac1836faa4aa2b9871ec1714e89f3895a71053842d5\": container with ID starting with 718cd9783415dfbecdf0cac1836faa4aa2b9871ec1714e89f3895a71053842d5 not found: ID does not exist" Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.473463 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qxrvd"] Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.483739 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qxrvd"] Feb 17 00:42:22 crc kubenswrapper[4805]: I0217 00:42:22.802137 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3996a68a-13de-4796-bb04-670cb7288b6d" path="/var/lib/kubelet/pods/3996a68a-13de-4796-bb04-670cb7288b6d/volumes" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.444837 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.665430 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.675003 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-m2qjt"] Feb 17 00:42:23 crc kubenswrapper[4805]: E0217 00:42:23.675424 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3996a68a-13de-4796-bb04-670cb7288b6d" containerName="dnsmasq-dns" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.675440 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3996a68a-13de-4796-bb04-670cb7288b6d" containerName="dnsmasq-dns" Feb 17 00:42:23 crc kubenswrapper[4805]: E0217 00:42:23.675461 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3996a68a-13de-4796-bb04-670cb7288b6d" containerName="init" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.675467 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3996a68a-13de-4796-bb04-670cb7288b6d" containerName="init" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.675612 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3996a68a-13de-4796-bb04-670cb7288b6d" containerName="dnsmasq-dns" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.676506 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.696752 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-m2qjt"] Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.815428 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppm2p\" (UniqueName: \"kubernetes.io/projected/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-kube-api-access-ppm2p\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.815480 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.815566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-config\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.815625 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-dns-svc\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.815640 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.856810 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.917604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-config\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.917768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-dns-svc\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.917798 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " 
pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.917862 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppm2p\" (UniqueName: \"kubernetes.io/projected/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-kube-api-access-ppm2p\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.917892 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.919501 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.920275 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-dns-svc\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.920483 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.922045 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-config\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.945211 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppm2p\" (UniqueName: \"kubernetes.io/projected/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-kube-api-access-ppm2p\") pod \"dnsmasq-dns-698758b865-m2qjt\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:23 crc kubenswrapper[4805]: I0217 00:42:23.990425 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.043820 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.528609 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-m2qjt"] Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.854053 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.884504 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.884706 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.886933 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.887074 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.887550 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.887664 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-hm5zx" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.939533 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/de228348-37d1-4ec0-9a47-11f4d895e6d6-cache\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.939570 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/de228348-37d1-4ec0-9a47-11f4d895e6d6-lock\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.939594 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de228348-37d1-4ec0-9a47-11f4d895e6d6-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.939623 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.939646 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:24 crc kubenswrapper[4805]: I0217 00:42:24.939668 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l956\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-kube-api-access-7l956\") pod \"swift-storage-0\" (UID: 
\"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.041192 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l956\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-kube-api-access-7l956\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.041357 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/de228348-37d1-4ec0-9a47-11f4d895e6d6-cache\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.041386 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/de228348-37d1-4ec0-9a47-11f4d895e6d6-lock\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.041405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de228348-37d1-4ec0-9a47-11f4d895e6d6-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.041430 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.041454 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: E0217 00:42:25.041557 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 00:42:25 crc kubenswrapper[4805]: E0217 00:42:25.041570 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 00:42:25 crc kubenswrapper[4805]: E0217 00:42:25.041635 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift podName:de228348-37d1-4ec0-9a47-11f4d895e6d6 nodeName:}" failed. No retries permitted until 2026-02-17 00:42:25.541622677 +0000 UTC m=+1171.557432075 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift") pod "swift-storage-0" (UID: "de228348-37d1-4ec0-9a47-11f4d895e6d6") : configmap "swift-ring-files" not found Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.041999 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.042054 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/de228348-37d1-4ec0-9a47-11f4d895e6d6-lock\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.042379 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/de228348-37d1-4ec0-9a47-11f4d895e6d6-cache\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.058860 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l956\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-kube-api-access-7l956\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.060750 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de228348-37d1-4ec0-9a47-11f4d895e6d6-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.063580 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.180689 4805 generic.go:334] "Generic (PLEG): container finished" podID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerID="1ef549c95fc1baaa43697702641077555045e0c7ed26ca1fbef6134366651cce" exitCode=0 Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.180775 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-m2qjt" event={"ID":"6dd9ba13-24f3-40a0-8354-a9e38c7d1368","Type":"ContainerDied","Data":"1ef549c95fc1baaa43697702641077555045e0c7ed26ca1fbef6134366651cce"} Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.180811 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-m2qjt" event={"ID":"6dd9ba13-24f3-40a0-8354-a9e38c7d1368","Type":"ContainerStarted","Data":"3a128f5f4b6089e0b56e25d3da882ca4f329dc1b755d48311c3e2a42879b8f95"} Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.184525 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1c79f087-7a87-405e-8a91-8450f22de65d","Type":"ContainerStarted","Data":"15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8"} Feb 
17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.185230 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.215160 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.233242616 podStartE2EDuration="42.215144794s" podCreationTimestamp="2026-02-17 00:41:43 +0000 UTC" firstStartedPulling="2026-02-17 00:41:54.267623419 +0000 UTC m=+1140.283432827" lastFinishedPulling="2026-02-17 00:42:24.249525607 +0000 UTC m=+1170.265335005" observedRunningTime="2026-02-17 00:42:25.214782344 +0000 UTC m=+1171.230591742" watchObservedRunningTime="2026-02-17 00:42:25.215144794 +0000 UTC m=+1171.230954192" Feb 17 00:42:25 crc kubenswrapper[4805]: I0217 00:42:25.550823 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:25 crc kubenswrapper[4805]: E0217 00:42:25.551030 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 00:42:25 crc kubenswrapper[4805]: E0217 00:42:25.551134 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 00:42:25 crc kubenswrapper[4805]: E0217 00:42:25.551186 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift podName:de228348-37d1-4ec0-9a47-11f4d895e6d6 nodeName:}" failed. No retries permitted until 2026-02-17 00:42:26.551170721 +0000 UTC m=+1172.566980119 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift") pod "swift-storage-0" (UID: "de228348-37d1-4ec0-9a47-11f4d895e6d6") : configmap "swift-ring-files" not found Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.213772 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-m2qjt" event={"ID":"6dd9ba13-24f3-40a0-8354-a9e38c7d1368","Type":"ContainerStarted","Data":"4ca0aba97e08c0fe815e72a9d3039ae9f0f2455400079df63e2fbde3b26ef4ec"} Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.214065 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.242851 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-m2qjt" podStartSLOduration=3.242829602 podStartE2EDuration="3.242829602s" podCreationTimestamp="2026-02-17 00:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:42:26.236798435 +0000 UTC m=+1172.252607843" watchObservedRunningTime="2026-02-17 00:42:26.242829602 +0000 UTC m=+1172.258639010" Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.576677 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:26 crc kubenswrapper[4805]: E0217 00:42:26.577208 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 00:42:26 crc kubenswrapper[4805]: E0217 00:42:26.577223 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 00:42:26 crc kubenswrapper[4805]: E0217 00:42:26.577266 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift podName:de228348-37d1-4ec0-9a47-11f4d895e6d6 nodeName:}" failed. No retries permitted until 2026-02-17 00:42:28.577251136 +0000 UTC m=+1174.593060534 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift") pod "swift-storage-0" (UID: "de228348-37d1-4ec0-9a47-11f4d895e6d6") : configmap "swift-ring-files" not found Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.814373 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7a5e-account-create-update-mcmp6"] Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.816429 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7a5e-account-create-update-mcmp6" Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.818402 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.840722 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7a5e-account-create-update-mcmp6"] Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.884046 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bd8q\" (UniqueName: \"kubernetes.io/projected/7b3669f3-fc93-4d03-a114-3de9f6385fc5-kube-api-access-5bd8q\") pod \"glance-7a5e-account-create-update-mcmp6\" (UID: \"7b3669f3-fc93-4d03-a114-3de9f6385fc5\") " pod="openstack/glance-7a5e-account-create-update-mcmp6" Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.884236 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b3669f3-fc93-4d03-a114-3de9f6385fc5-operator-scripts\") pod \"glance-7a5e-account-create-update-mcmp6\" (UID: \"7b3669f3-fc93-4d03-a114-3de9f6385fc5\") " pod="openstack/glance-7a5e-account-create-update-mcmp6" Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.907041 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dspfd"] Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.908225 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dspfd" Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.920882 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dspfd"] Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.986837 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tz99\" (UniqueName: \"kubernetes.io/projected/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-kube-api-access-4tz99\") pod \"glance-db-create-dspfd\" (UID: \"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5\") " pod="openstack/glance-db-create-dspfd" Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.986976 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bd8q\" (UniqueName: \"kubernetes.io/projected/7b3669f3-fc93-4d03-a114-3de9f6385fc5-kube-api-access-5bd8q\") pod \"glance-7a5e-account-create-update-mcmp6\" (UID: \"7b3669f3-fc93-4d03-a114-3de9f6385fc5\") " pod="openstack/glance-7a5e-account-create-update-mcmp6" Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.987071 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-operator-scripts\") pod \"glance-db-create-dspfd\" (UID: \"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5\") " pod="openstack/glance-db-create-dspfd" Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.987133 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b3669f3-fc93-4d03-a114-3de9f6385fc5-operator-scripts\") pod \"glance-7a5e-account-create-update-mcmp6\" (UID: \"7b3669f3-fc93-4d03-a114-3de9f6385fc5\") " pod="openstack/glance-7a5e-account-create-update-mcmp6" Feb 17 00:42:26 crc kubenswrapper[4805]: I0217 00:42:26.989881 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b3669f3-fc93-4d03-a114-3de9f6385fc5-operator-scripts\") pod \"glance-7a5e-account-create-update-mcmp6\" (UID: \"7b3669f3-fc93-4d03-a114-3de9f6385fc5\") " pod="openstack/glance-7a5e-account-create-update-mcmp6" Feb 17 00:42:27 crc kubenswrapper[4805]: I0217 00:42:27.018309 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bd8q\" (UniqueName: \"kubernetes.io/projected/7b3669f3-fc93-4d03-a114-3de9f6385fc5-kube-api-access-5bd8q\") pod \"glance-7a5e-account-create-update-mcmp6\" (UID: \"7b3669f3-fc93-4d03-a114-3de9f6385fc5\") " pod="openstack/glance-7a5e-account-create-update-mcmp6" Feb 17 00:42:27 crc kubenswrapper[4805]: I0217 00:42:27.088790 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tz99\" (UniqueName: \"kubernetes.io/projected/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-kube-api-access-4tz99\") pod \"glance-db-create-dspfd\" (UID: \"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5\") " pod="openstack/glance-db-create-dspfd" Feb 17 00:42:27 crc kubenswrapper[4805]: I0217 00:42:27.088930 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-operator-scripts\") pod \"glance-db-create-dspfd\" (UID: \"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5\") " pod="openstack/glance-db-create-dspfd" Feb 17 00:42:27 crc kubenswrapper[4805]: I0217 00:42:27.089905 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-operator-scripts\") pod \"glance-db-create-dspfd\" (UID: \"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5\") " pod="openstack/glance-db-create-dspfd" Feb 17 00:42:27 crc kubenswrapper[4805]: I0217 00:42:27.105218 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tz99\" (UniqueName: \"kubernetes.io/projected/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-kube-api-access-4tz99\") pod \"glance-db-create-dspfd\" (UID: \"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5\") " pod="openstack/glance-db-create-dspfd" Feb 17 00:42:27 crc kubenswrapper[4805]: I0217 00:42:27.138439 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7a5e-account-create-update-mcmp6" Feb 17 00:42:27 crc kubenswrapper[4805]: I0217 00:42:27.233600 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dspfd" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.329173 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rx2dm"] Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.332284 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rx2dm" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.340626 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.355579 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rx2dm"] Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.422676 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d712a5-c96f-4f52-b857-210ce226090e-operator-scripts\") pod \"root-account-create-update-rx2dm\" (UID: \"16d712a5-c96f-4f52-b857-210ce226090e\") " pod="openstack/root-account-create-update-rx2dm" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.423179 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4p9l\" (UniqueName: \"kubernetes.io/projected/16d712a5-c96f-4f52-b857-210ce226090e-kube-api-access-q4p9l\") pod \"root-account-create-update-rx2dm\" (UID: \"16d712a5-c96f-4f52-b857-210ce226090e\") " pod="openstack/root-account-create-update-rx2dm" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.525280 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4p9l\" (UniqueName: \"kubernetes.io/projected/16d712a5-c96f-4f52-b857-210ce226090e-kube-api-access-q4p9l\") pod \"root-account-create-update-rx2dm\" (UID: \"16d712a5-c96f-4f52-b857-210ce226090e\") " pod="openstack/root-account-create-update-rx2dm" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.525414 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d712a5-c96f-4f52-b857-210ce226090e-operator-scripts\") pod \"root-account-create-update-rx2dm\" (UID: \"16d712a5-c96f-4f52-b857-210ce226090e\") " pod="openstack/root-account-create-update-rx2dm" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.526170 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d712a5-c96f-4f52-b857-210ce226090e-operator-scripts\") pod \"root-account-create-update-rx2dm\" (UID: \"16d712a5-c96f-4f52-b857-210ce226090e\") " pod="openstack/root-account-create-update-rx2dm" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.552178 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4p9l\" (UniqueName: \"kubernetes.io/projected/16d712a5-c96f-4f52-b857-210ce226090e-kube-api-access-q4p9l\") pod \"root-account-create-update-rx2dm\" (UID: \"16d712a5-c96f-4f52-b857-210ce226090e\") " pod="openstack/root-account-create-update-rx2dm" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.626709 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:28 crc kubenswrapper[4805]: E0217 00:42:28.627046 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 00:42:28 crc kubenswrapper[4805]: E0217 00:42:28.627085 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 00:42:28 crc kubenswrapper[4805]: E0217 00:42:28.627163 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift podName:de228348-37d1-4ec0-9a47-11f4d895e6d6 nodeName:}" failed. No retries permitted until 2026-02-17 00:42:32.627143263 +0000 UTC m=+1178.642952671 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift") pod "swift-storage-0" (UID: "de228348-37d1-4ec0-9a47-11f4d895e6d6") : configmap "swift-ring-files" not found Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.655010 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rx2dm" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.808582 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-c298m"] Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.809885 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.815829 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.815936 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.816075 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.819432 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-c298m"] Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.932750 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-swiftconf\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.932951 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-scripts\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.933010 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-ring-data-devices\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.933256 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-combined-ca-bundle\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.933370 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dhvk\" (UniqueName: \"kubernetes.io/projected/8150553f-2c0e-4371-9b0d-22364c3c9db4-kube-api-access-8dhvk\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.933442 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-dispersionconf\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:28 crc kubenswrapper[4805]: I0217 00:42:28.933473 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8150553f-2c0e-4371-9b0d-22364c3c9db4-etc-swift\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.035568 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-swiftconf\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.035656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-scripts\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.035686 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-ring-data-devices\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.035775 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-combined-ca-bundle\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.035812 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dhvk\" (UniqueName: \"kubernetes.io/projected/8150553f-2c0e-4371-9b0d-22364c3c9db4-kube-api-access-8dhvk\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.035854 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-dispersionconf\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.035883 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8150553f-2c0e-4371-9b0d-22364c3c9db4-etc-swift\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.036268 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8150553f-2c0e-4371-9b0d-22364c3c9db4-etc-swift\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.037422 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-scripts\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.037638 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-ring-data-devices\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.040455 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-dispersionconf\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.040532 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-combined-ca-bundle\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.040771 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-swiftconf\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.052631 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dhvk\" (UniqueName: \"kubernetes.io/projected/8150553f-2c0e-4371-9b0d-22364c3c9db4-kube-api-access-8dhvk\") pod \"swift-ring-rebalance-c298m\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.130715 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.586796 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7a5e-account-create-update-mcmp6"] Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.598083 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dspfd"] Feb 17 00:42:29 crc kubenswrapper[4805]: W0217 00:42:29.610696 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef27e931_15d7_45e2_ae8d_cd31c9fffdd5.slice/crio-0668aab49aa383a99f723a89f4884da23d25341cfa8fa658ce44e5b7f7829a9f WatchSource:0}: Error finding container 0668aab49aa383a99f723a89f4884da23d25341cfa8fa658ce44e5b7f7829a9f: Status 404 returned error can't find the container with id 0668aab49aa383a99f723a89f4884da23d25341cfa8fa658ce44e5b7f7829a9f Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.767012 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rx2dm"] Feb 17 00:42:29 crc kubenswrapper[4805]: W0217 00:42:29.785031 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16d712a5_c96f_4f52_b857_210ce226090e.slice/crio-01f6c8ca101f493cf77f031b40c017d6292c408a92ccf0718f0391caf1bb2319 WatchSource:0}: Error finding container 01f6c8ca101f493cf77f031b40c017d6292c408a92ccf0718f0391caf1bb2319: Status 404 returned error can't find the container with id 01f6c8ca101f493cf77f031b40c017d6292c408a92ccf0718f0391caf1bb2319 Feb 17 00:42:29 crc kubenswrapper[4805]: I0217 00:42:29.863795 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-c298m"] Feb 17 00:42:29 crc kubenswrapper[4805]: W0217 00:42:29.870242 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8150553f_2c0e_4371_9b0d_22364c3c9db4.slice/crio-df0a9700ece7330f4404622aed46b034b327708672bc1e213a459cda6853003d WatchSource:0}: Error finding container df0a9700ece7330f4404622aed46b034b327708672bc1e213a459cda6853003d: Status 404 returned error can't find the container with id df0a9700ece7330f4404622aed46b034b327708672bc1e213a459cda6853003d Feb 17 00:42:30 crc kubenswrapper[4805]: I0217 00:42:30.252803 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rx2dm" event={"ID":"16d712a5-c96f-4f52-b857-210ce226090e","Type":"ContainerStarted","Data":"7ac35363fda0f2081de48eb146c51da54e39e0d7dc0b2f422289f5f3444be076"} Feb 17 00:42:30 crc kubenswrapper[4805]: I0217 00:42:30.253121 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rx2dm" event={"ID":"16d712a5-c96f-4f52-b857-210ce226090e","Type":"ContainerStarted","Data":"01f6c8ca101f493cf77f031b40c017d6292c408a92ccf0718f0391caf1bb2319"} Feb 17 00:42:30 crc kubenswrapper[4805]: I0217 00:42:30.254205 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c298m" event={"ID":"8150553f-2c0e-4371-9b0d-22364c3c9db4","Type":"ContainerStarted","Data":"df0a9700ece7330f4404622aed46b034b327708672bc1e213a459cda6853003d"} Feb 17 00:42:30 crc kubenswrapper[4805]: I0217 00:42:30.255749 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dspfd" 
event={"ID":"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5","Type":"ContainerStarted","Data":"94aa67b1a9e958378c25e37f711d5aab1882d87a3f13f2dfd363f6fff074092f"} Feb 17 00:42:30 crc kubenswrapper[4805]: I0217 00:42:30.255820 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dspfd" event={"ID":"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5","Type":"ContainerStarted","Data":"0668aab49aa383a99f723a89f4884da23d25341cfa8fa658ce44e5b7f7829a9f"} Feb 17 00:42:30 crc kubenswrapper[4805]: I0217 00:42:30.258481 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2e80aa4a-3260-4111-a066-112ffac85ae7","Type":"ContainerStarted","Data":"4a484a728298a20e1b9848c9f9e613d8f0b2cde3abff5c17f26d492acef20f12"} Feb 17 00:42:30 crc kubenswrapper[4805]: I0217 00:42:30.260065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7a5e-account-create-update-mcmp6" event={"ID":"7b3669f3-fc93-4d03-a114-3de9f6385fc5","Type":"ContainerStarted","Data":"5989ac843b1186af3b554fbdd6da3eee35a1dd814f649d566427f624c72bf250"} Feb 17 00:42:30 crc kubenswrapper[4805]: I0217 00:42:30.260089 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7a5e-account-create-update-mcmp6" event={"ID":"7b3669f3-fc93-4d03-a114-3de9f6385fc5","Type":"ContainerStarted","Data":"d535b51926130475314b853f4cba8047902f9ae3ec04be907c76b7f8f3df1ac6"} Feb 17 00:42:30 crc kubenswrapper[4805]: I0217 00:42:30.282807 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-rx2dm" podStartSLOduration=2.282790415 podStartE2EDuration="2.282790415s" podCreationTimestamp="2026-02-17 00:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:42:30.274467884 +0000 UTC m=+1176.290277282" watchObservedRunningTime="2026-02-17 00:42:30.282790415 +0000 UTC m=+1176.298599803" Feb 17 00:42:30 crc kubenswrapper[4805]: I0217 00:42:30.298028 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-dspfd" podStartSLOduration=4.298010757 podStartE2EDuration="4.298010757s" podCreationTimestamp="2026-02-17 00:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:42:30.296436083 +0000 UTC m=+1176.312245481" watchObservedRunningTime="2026-02-17 00:42:30.298010757 +0000 UTC m=+1176.313820155" Feb 17 00:42:30 crc kubenswrapper[4805]: I0217 00:42:30.983572 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-68cc555589-d9q87" podUID="706bb0a5-075b-4a4e-93b1-ca1da7c16756" containerName="console" containerID="cri-o://91906243153f670bf0b208f90581174f31e99303492bb1dbf70ae40a2be7395f" gracePeriod=15 Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.272808 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68cc555589-d9q87_706bb0a5-075b-4a4e-93b1-ca1da7c16756/console/0.log" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.272854 4805 generic.go:334] "Generic (PLEG): container finished" podID="706bb0a5-075b-4a4e-93b1-ca1da7c16756" containerID="91906243153f670bf0b208f90581174f31e99303492bb1dbf70ae40a2be7395f" exitCode=2 Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.272908 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-68cc555589-d9q87" event={"ID":"706bb0a5-075b-4a4e-93b1-ca1da7c16756","Type":"ContainerDied","Data":"91906243153f670bf0b208f90581174f31e99303492bb1dbf70ae40a2be7395f"} Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.276828 4805 generic.go:334] "Generic (PLEG): container finished" podID="ef27e931-15d7-45e2-ae8d-cd31c9fffdd5" containerID="94aa67b1a9e958378c25e37f711d5aab1882d87a3f13f2dfd363f6fff074092f" exitCode=0 Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.276903 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dspfd" event={"ID":"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5","Type":"ContainerDied","Data":"94aa67b1a9e958378c25e37f711d5aab1882d87a3f13f2dfd363f6fff074092f"} Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.278860 4805 generic.go:334] "Generic (PLEG): container finished" podID="7b3669f3-fc93-4d03-a114-3de9f6385fc5" containerID="5989ac843b1186af3b554fbdd6da3eee35a1dd814f649d566427f624c72bf250" exitCode=0 Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.278950 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7a5e-account-create-update-mcmp6" event={"ID":"7b3669f3-fc93-4d03-a114-3de9f6385fc5","Type":"ContainerDied","Data":"5989ac843b1186af3b554fbdd6da3eee35a1dd814f649d566427f624c72bf250"} Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.281815 4805 generic.go:334] "Generic (PLEG): container finished" podID="16d712a5-c96f-4f52-b857-210ce226090e" containerID="7ac35363fda0f2081de48eb146c51da54e39e0d7dc0b2f422289f5f3444be076" exitCode=0 Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.281852 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rx2dm" event={"ID":"16d712a5-c96f-4f52-b857-210ce226090e","Type":"ContainerDied","Data":"7ac35363fda0f2081de48eb146c51da54e39e0d7dc0b2f422289f5f3444be076"} Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.307725 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7a5e-account-create-update-mcmp6" podStartSLOduration=5.307698887 podStartE2EDuration="5.307698887s" podCreationTimestamp="2026-02-17 00:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:42:30.333469281 +0000 UTC m=+1176.349278689" watchObservedRunningTime="2026-02-17 00:42:31.307698887 +0000 UTC m=+1177.323508295" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.552359 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68cc555589-d9q87_706bb0a5-075b-4a4e-93b1-ca1da7c16756/console/0.log" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.552421 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.600859 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-oauth-config\") pod \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.600967 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbgmb\" (UniqueName: \"kubernetes.io/projected/706bb0a5-075b-4a4e-93b1-ca1da7c16756-kube-api-access-wbgmb\") pod \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.601078 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-service-ca\") pod \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.601956 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-service-ca" (OuterVolumeSpecName: "service-ca") pod "706bb0a5-075b-4a4e-93b1-ca1da7c16756" (UID: "706bb0a5-075b-4a4e-93b1-ca1da7c16756"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.602634 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-trusted-ca-bundle\") pod \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.604592 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "706bb0a5-075b-4a4e-93b1-ca1da7c16756" (UID: "706bb0a5-075b-4a4e-93b1-ca1da7c16756"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.604925 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-config\") pod \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.605458 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-config" (OuterVolumeSpecName: "console-config") pod "706bb0a5-075b-4a4e-93b1-ca1da7c16756" (UID: "706bb0a5-075b-4a4e-93b1-ca1da7c16756"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.605500 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-serving-cert\") pod \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.605553 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-oauth-serving-cert\") pod \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\" (UID: \"706bb0a5-075b-4a4e-93b1-ca1da7c16756\") " Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.606103 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "706bb0a5-075b-4a4e-93b1-ca1da7c16756" (UID: "706bb0a5-075b-4a4e-93b1-ca1da7c16756"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.606874 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.606909 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.606922 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.606935 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/706bb0a5-075b-4a4e-93b1-ca1da7c16756-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.608312 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "706bb0a5-075b-4a4e-93b1-ca1da7c16756" (UID: "706bb0a5-075b-4a4e-93b1-ca1da7c16756"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.608286 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "706bb0a5-075b-4a4e-93b1-ca1da7c16756" (UID: "706bb0a5-075b-4a4e-93b1-ca1da7c16756"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.612554 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706bb0a5-075b-4a4e-93b1-ca1da7c16756-kube-api-access-wbgmb" (OuterVolumeSpecName: "kube-api-access-wbgmb") pod "706bb0a5-075b-4a4e-93b1-ca1da7c16756" (UID: "706bb0a5-075b-4a4e-93b1-ca1da7c16756"). InnerVolumeSpecName "kube-api-access-wbgmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.709449 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbgmb\" (UniqueName: \"kubernetes.io/projected/706bb0a5-075b-4a4e-93b1-ca1da7c16756-kube-api-access-wbgmb\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.709492 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:31 crc kubenswrapper[4805]: I0217 00:42:31.709636 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/706bb0a5-075b-4a4e-93b1-ca1da7c16756-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.291588 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68cc555589-d9q87_706bb0a5-075b-4a4e-93b1-ca1da7c16756/console/0.log" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.291682 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cc555589-d9q87" event={"ID":"706bb0a5-075b-4a4e-93b1-ca1da7c16756","Type":"ContainerDied","Data":"ed2f53cfe465b2aa306b4ff38645378653f0d726d2a2f11cdf88a7a454242b9f"} Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.291708 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-68cc555589-d9q87" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.291716 4805 scope.go:117] "RemoveContainer" containerID="91906243153f670bf0b208f90581174f31e99303492bb1dbf70ae40a2be7395f" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.295452 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2e80aa4a-3260-4111-a066-112ffac85ae7","Type":"ContainerStarted","Data":"fbae9e878feb63f82315f0132657cf816b416945cc3d258c27f8087be798bcef"} Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.323635 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-68cc555589-d9q87"] Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.329818 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-68cc555589-d9q87"] Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.485774 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-2zdbb"] Feb 17 00:42:32 crc kubenswrapper[4805]: E0217 00:42:32.486149 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706bb0a5-075b-4a4e-93b1-ca1da7c16756" containerName="console" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.486166 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="706bb0a5-075b-4a4e-93b1-ca1da7c16756" containerName="console" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.486317 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="706bb0a5-075b-4a4e-93b1-ca1da7c16756" containerName="console" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.486903 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2zdbb" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.501900 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2zdbb"] Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.539243 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrx4n\" (UniqueName: \"kubernetes.io/projected/06dfed54-f183-46cc-abd4-089a231b2201-kube-api-access-zrx4n\") pod \"keystone-db-create-2zdbb\" (UID: \"06dfed54-f183-46cc-abd4-089a231b2201\") " pod="openstack/keystone-db-create-2zdbb" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.539842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06dfed54-f183-46cc-abd4-089a231b2201-operator-scripts\") pod \"keystone-db-create-2zdbb\" (UID: \"06dfed54-f183-46cc-abd4-089a231b2201\") " pod="openstack/keystone-db-create-2zdbb" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.614452 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7c85-account-create-update-xt2cz"] Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.616518 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c85-account-create-update-xt2cz" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.619354 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.627843 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c85-account-create-update-xt2cz"] Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.641737 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrx4n\" (UniqueName: \"kubernetes.io/projected/06dfed54-f183-46cc-abd4-089a231b2201-kube-api-access-zrx4n\") pod \"keystone-db-create-2zdbb\" (UID: \"06dfed54-f183-46cc-abd4-089a231b2201\") " pod="openstack/keystone-db-create-2zdbb" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.641815 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.641874 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06dfed54-f183-46cc-abd4-089a231b2201-operator-scripts\") pod \"keystone-db-create-2zdbb\" (UID: \"06dfed54-f183-46cc-abd4-089a231b2201\") " pod="openstack/keystone-db-create-2zdbb" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.642752 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06dfed54-f183-46cc-abd4-089a231b2201-operator-scripts\") pod \"keystone-db-create-2zdbb\" (UID: \"06dfed54-f183-46cc-abd4-089a231b2201\") " pod="openstack/keystone-db-create-2zdbb" Feb 17 00:42:32 crc kubenswrapper[4805]: E0217 00:42:32.643425 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 00:42:32 crc kubenswrapper[4805]: E0217 00:42:32.643445 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 00:42:32 crc kubenswrapper[4805]: E0217 00:42:32.643487 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift podName:de228348-37d1-4ec0-9a47-11f4d895e6d6 nodeName:}" failed. No retries permitted until 2026-02-17 00:42:40.64347285 +0000 UTC m=+1186.659282248 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift") pod "swift-storage-0" (UID: "de228348-37d1-4ec0-9a47-11f4d895e6d6") : configmap "swift-ring-files" not found Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.659419 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrx4n\" (UniqueName: \"kubernetes.io/projected/06dfed54-f183-46cc-abd4-089a231b2201-kube-api-access-zrx4n\") pod \"keystone-db-create-2zdbb\" (UID: \"06dfed54-f183-46cc-abd4-089a231b2201\") " pod="openstack/keystone-db-create-2zdbb" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.744801 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-operator-scripts\") pod \"keystone-7c85-account-create-update-xt2cz\" (UID: \"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba\") " pod="openstack/keystone-7c85-account-create-update-xt2cz" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.745096 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdn7r\" (UniqueName: \"kubernetes.io/projected/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-kube-api-access-jdn7r\") pod \"keystone-7c85-account-create-update-xt2cz\" (UID: \"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba\") " pod="openstack/keystone-7c85-account-create-update-xt2cz" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.815296 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2zdbb" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.820129 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706bb0a5-075b-4a4e-93b1-ca1da7c16756" path="/var/lib/kubelet/pods/706bb0a5-075b-4a4e-93b1-ca1da7c16756/volumes" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.821182 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-fsxmm"] Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.835026 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-fsxmm"] Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.835089 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3c43-account-create-update-wqp2f"] Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.835149 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fsxmm" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.837273 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3c43-account-create-update-wqp2f"] Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.837359 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3c43-account-create-update-wqp2f" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.839559 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.846997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdn7r\" (UniqueName: \"kubernetes.io/projected/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-kube-api-access-jdn7r\") pod \"keystone-7c85-account-create-update-xt2cz\" (UID: \"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba\") " pod="openstack/keystone-7c85-account-create-update-xt2cz" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.847108 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-operator-scripts\") pod \"keystone-7c85-account-create-update-xt2cz\" (UID: \"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba\") " pod="openstack/keystone-7c85-account-create-update-xt2cz" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.853464 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-operator-scripts\") pod \"keystone-7c85-account-create-update-xt2cz\" (UID: \"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba\") " pod="openstack/keystone-7c85-account-create-update-xt2cz" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.896222 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdn7r\" (UniqueName: \"kubernetes.io/projected/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-kube-api-access-jdn7r\") pod \"keystone-7c85-account-create-update-xt2cz\" (UID: \"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba\") " pod="openstack/keystone-7c85-account-create-update-xt2cz" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.945807 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c85-account-create-update-xt2cz" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.948837 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e10a8c-b19f-4558-acef-2027c30614bf-operator-scripts\") pod \"placement-3c43-account-create-update-wqp2f\" (UID: \"b9e10a8c-b19f-4558-acef-2027c30614bf\") " pod="openstack/placement-3c43-account-create-update-wqp2f" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.949074 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwdz9\" (UniqueName: \"kubernetes.io/projected/844af17c-95de-4afa-8d20-f00cf5195840-kube-api-access-rwdz9\") pod \"placement-db-create-fsxmm\" (UID: \"844af17c-95de-4afa-8d20-f00cf5195840\") " pod="openstack/placement-db-create-fsxmm" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.949105 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/844af17c-95de-4afa-8d20-f00cf5195840-operator-scripts\") pod \"placement-db-create-fsxmm\" (UID: \"844af17c-95de-4afa-8d20-f00cf5195840\") " pod="openstack/placement-db-create-fsxmm" Feb 17 00:42:32 crc kubenswrapper[4805]: I0217 00:42:32.949133 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnqd6\" (UniqueName: \"kubernetes.io/projected/b9e10a8c-b19f-4558-acef-2027c30614bf-kube-api-access-dnqd6\") pod \"placement-3c43-account-create-update-wqp2f\" (UID: \"b9e10a8c-b19f-4558-acef-2027c30614bf\") " pod="openstack/placement-3c43-account-create-update-wqp2f" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.051488 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwdz9\" (UniqueName: \"kubernetes.io/projected/844af17c-95de-4afa-8d20-f00cf5195840-kube-api-access-rwdz9\") pod \"placement-db-create-fsxmm\" (UID: \"844af17c-95de-4afa-8d20-f00cf5195840\") " pod="openstack/placement-db-create-fsxmm" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.051947 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/844af17c-95de-4afa-8d20-f00cf5195840-operator-scripts\") pod \"placement-db-create-fsxmm\" (UID: \"844af17c-95de-4afa-8d20-f00cf5195840\") " pod="openstack/placement-db-create-fsxmm" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.052154 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnqd6\" (UniqueName: \"kubernetes.io/projected/b9e10a8c-b19f-4558-acef-2027c30614bf-kube-api-access-dnqd6\") pod \"placement-3c43-account-create-update-wqp2f\" (UID: \"b9e10a8c-b19f-4558-acef-2027c30614bf\") " pod="openstack/placement-3c43-account-create-update-wqp2f" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.052593 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/844af17c-95de-4afa-8d20-f00cf5195840-operator-scripts\") pod \"placement-db-create-fsxmm\" (UID: \"844af17c-95de-4afa-8d20-f00cf5195840\") " pod="openstack/placement-db-create-fsxmm" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.052968 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/b9e10a8c-b19f-4558-acef-2027c30614bf-operator-scripts\") pod \"placement-3c43-account-create-update-wqp2f\" (UID: \"b9e10a8c-b19f-4558-acef-2027c30614bf\") " pod="openstack/placement-3c43-account-create-update-wqp2f" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.054475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e10a8c-b19f-4558-acef-2027c30614bf-operator-scripts\") pod \"placement-3c43-account-create-update-wqp2f\" (UID: \"b9e10a8c-b19f-4558-acef-2027c30614bf\") " pod="openstack/placement-3c43-account-create-update-wqp2f" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.090958 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnqd6\" (UniqueName: \"kubernetes.io/projected/b9e10a8c-b19f-4558-acef-2027c30614bf-kube-api-access-dnqd6\") pod \"placement-3c43-account-create-update-wqp2f\" (UID: \"b9e10a8c-b19f-4558-acef-2027c30614bf\") " pod="openstack/placement-3c43-account-create-update-wqp2f" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.096000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwdz9\" (UniqueName: \"kubernetes.io/projected/844af17c-95de-4afa-8d20-f00cf5195840-kube-api-access-rwdz9\") pod \"placement-db-create-fsxmm\" (UID: \"844af17c-95de-4afa-8d20-f00cf5195840\") " pod="openstack/placement-db-create-fsxmm" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.170761 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fsxmm" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.200545 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dspfd" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.219346 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3c43-account-create-update-wqp2f" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.256618 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tz99\" (UniqueName: \"kubernetes.io/projected/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-kube-api-access-4tz99\") pod \"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5\" (UID: \"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5\") " Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.256914 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-operator-scripts\") pod \"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5\" (UID: \"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5\") " Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.257598 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef27e931-15d7-45e2-ae8d-cd31c9fffdd5" (UID: "ef27e931-15d7-45e2-ae8d-cd31c9fffdd5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.260498 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.293810 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-kube-api-access-4tz99" (OuterVolumeSpecName: "kube-api-access-4tz99") pod "ef27e931-15d7-45e2-ae8d-cd31c9fffdd5" (UID: "ef27e931-15d7-45e2-ae8d-cd31c9fffdd5"). InnerVolumeSpecName "kube-api-access-4tz99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.307464 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dspfd" event={"ID":"ef27e931-15d7-45e2-ae8d-cd31c9fffdd5","Type":"ContainerDied","Data":"0668aab49aa383a99f723a89f4884da23d25341cfa8fa658ce44e5b7f7829a9f"} Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.307529 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0668aab49aa383a99f723a89f4884da23d25341cfa8fa658ce44e5b7f7829a9f" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.307749 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dspfd" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.362591 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tz99\" (UniqueName: \"kubernetes.io/projected/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5-kube-api-access-4tz99\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.461180 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fq4tj"] Feb 17 00:42:33 crc kubenswrapper[4805]: E0217 00:42:33.461565 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef27e931-15d7-45e2-ae8d-cd31c9fffdd5" containerName="mariadb-database-create" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.461578 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef27e931-15d7-45e2-ae8d-cd31c9fffdd5" containerName="mariadb-database-create" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.461757 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef27e931-15d7-45e2-ae8d-cd31c9fffdd5" containerName="mariadb-database-create" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.462544 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.485860 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fq4tj"] Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.566240 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0711d3-a423-437c-9de6-9c0be097d3bd-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fq4tj\" (UID: \"dd0711d3-a423-437c-9de6-9c0be097d3bd\") " pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.566614 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r88x\" (UniqueName: \"kubernetes.io/projected/dd0711d3-a423-437c-9de6-9c0be097d3bd-kube-api-access-8r88x\") pod \"mysqld-exporter-openstack-db-create-fq4tj\" (UID: \"dd0711d3-a423-437c-9de6-9c0be097d3bd\") " pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.605525 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.672846 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-3992-account-create-update-h2vts"] Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.674723 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0711d3-a423-437c-9de6-9c0be097d3bd-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fq4tj\" (UID: \"dd0711d3-a423-437c-9de6-9c0be097d3bd\") " pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.674899 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r88x\" (UniqueName: \"kubernetes.io/projected/dd0711d3-a423-437c-9de6-9c0be097d3bd-kube-api-access-8r88x\") pod \"mysqld-exporter-openstack-db-create-fq4tj\" (UID: \"dd0711d3-a423-437c-9de6-9c0be097d3bd\") " pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.675217 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.676486 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0711d3-a423-437c-9de6-9c0be097d3bd-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fq4tj\" (UID: \"dd0711d3-a423-437c-9de6-9c0be097d3bd\") " pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.678179 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.684798 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-3992-account-create-update-h2vts"] Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.706294 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r88x\" (UniqueName: \"kubernetes.io/projected/dd0711d3-a423-437c-9de6-9c0be097d3bd-kube-api-access-8r88x\") pod \"mysqld-exporter-openstack-db-create-fq4tj\" (UID: \"dd0711d3-a423-437c-9de6-9c0be097d3bd\") " pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.777079 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn4md\" (UniqueName: \"kubernetes.io/projected/1db2630f-effd-4730-a324-bbfe90d75a8a-kube-api-access-tn4md\") pod \"mysqld-exporter-3992-account-create-update-h2vts\" (UID: \"1db2630f-effd-4730-a324-bbfe90d75a8a\") " pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.777122 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db2630f-effd-4730-a324-bbfe90d75a8a-operator-scripts\") pod \"mysqld-exporter-3992-account-create-update-h2vts\" (UID: \"1db2630f-effd-4730-a324-bbfe90d75a8a\") " pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.781697 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.879515 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn4md\" (UniqueName: \"kubernetes.io/projected/1db2630f-effd-4730-a324-bbfe90d75a8a-kube-api-access-tn4md\") pod \"mysqld-exporter-3992-account-create-update-h2vts\" (UID: \"1db2630f-effd-4730-a324-bbfe90d75a8a\") " pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.880670 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db2630f-effd-4730-a324-bbfe90d75a8a-operator-scripts\") pod \"mysqld-exporter-3992-account-create-update-h2vts\" (UID: \"1db2630f-effd-4730-a324-bbfe90d75a8a\") " pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.882717 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db2630f-effd-4730-a324-bbfe90d75a8a-operator-scripts\") pod \"mysqld-exporter-3992-account-create-update-h2vts\" (UID: \"1db2630f-effd-4730-a324-bbfe90d75a8a\") " pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.899663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn4md\" (UniqueName: \"kubernetes.io/projected/1db2630f-effd-4730-a324-bbfe90d75a8a-kube-api-access-tn4md\") pod \"mysqld-exporter-3992-account-create-update-h2vts\" (UID: \"1db2630f-effd-4730-a324-bbfe90d75a8a\") " pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" Feb 17 00:42:33 crc kubenswrapper[4805]: I0217 00:42:33.992481 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" Feb 17 00:42:34 crc kubenswrapper[4805]: I0217 00:42:34.045508 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:42:34 crc kubenswrapper[4805]: I0217 00:42:34.139131 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xl4vd"] Feb 17 00:42:34 crc kubenswrapper[4805]: I0217 00:42:34.139370 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" podUID="94d37dc7-7d79-4fcf-8971-743ef480eedd" containerName="dnsmasq-dns" containerID="cri-o://cbbdbb93028a3695caef28dec2d7dd4bf33469bd8219924081b3a82e467ebf39" gracePeriod=10 Feb 17 00:42:36 crc kubenswrapper[4805]: I0217 00:42:36.002448 4805 generic.go:334] "Generic (PLEG): container finished" podID="94d37dc7-7d79-4fcf-8971-743ef480eedd" containerID="cbbdbb93028a3695caef28dec2d7dd4bf33469bd8219924081b3a82e467ebf39" exitCode=0 Feb 17 00:42:36 crc kubenswrapper[4805]: I0217 00:42:36.002733 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" event={"ID":"94d37dc7-7d79-4fcf-8971-743ef480eedd","Type":"ContainerDied","Data":"cbbdbb93028a3695caef28dec2d7dd4bf33469bd8219924081b3a82e467ebf39"} Feb 17 00:42:36 crc kubenswrapper[4805]: I0217 00:42:36.249914 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" podUID="94d37dc7-7d79-4fcf-8971-743ef480eedd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Feb 17 00:42:37 crc kubenswrapper[4805]: I0217 00:42:37.852027 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rx2dm" Feb 17 00:42:37 crc kubenswrapper[4805]: I0217 00:42:37.864575 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7a5e-account-create-update-mcmp6" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.001473 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d712a5-c96f-4f52-b857-210ce226090e-operator-scripts\") pod \"16d712a5-c96f-4f52-b857-210ce226090e\" (UID: \"16d712a5-c96f-4f52-b857-210ce226090e\") " Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.001865 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b3669f3-fc93-4d03-a114-3de9f6385fc5-operator-scripts\") pod \"7b3669f3-fc93-4d03-a114-3de9f6385fc5\" (UID: \"7b3669f3-fc93-4d03-a114-3de9f6385fc5\") " Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.001933 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bd8q\" (UniqueName: \"kubernetes.io/projected/7b3669f3-fc93-4d03-a114-3de9f6385fc5-kube-api-access-5bd8q\") pod \"7b3669f3-fc93-4d03-a114-3de9f6385fc5\" (UID: \"7b3669f3-fc93-4d03-a114-3de9f6385fc5\") " Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.001990 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4p9l\" (UniqueName: \"kubernetes.io/projected/16d712a5-c96f-4f52-b857-210ce226090e-kube-api-access-q4p9l\") pod \"16d712a5-c96f-4f52-b857-210ce226090e\" (UID: \"16d712a5-c96f-4f52-b857-210ce226090e\") " Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.002407 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16d712a5-c96f-4f52-b857-210ce226090e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16d712a5-c96f-4f52-b857-210ce226090e" (UID: "16d712a5-c96f-4f52-b857-210ce226090e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.002488 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3669f3-fc93-4d03-a114-3de9f6385fc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b3669f3-fc93-4d03-a114-3de9f6385fc5" (UID: "7b3669f3-fc93-4d03-a114-3de9f6385fc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.010108 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3669f3-fc93-4d03-a114-3de9f6385fc5-kube-api-access-5bd8q" (OuterVolumeSpecName: "kube-api-access-5bd8q") pod "7b3669f3-fc93-4d03-a114-3de9f6385fc5" (UID: "7b3669f3-fc93-4d03-a114-3de9f6385fc5"). InnerVolumeSpecName "kube-api-access-5bd8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.011181 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16d712a5-c96f-4f52-b857-210ce226090e-kube-api-access-q4p9l" (OuterVolumeSpecName: "kube-api-access-q4p9l") pod "16d712a5-c96f-4f52-b857-210ce226090e" (UID: "16d712a5-c96f-4f52-b857-210ce226090e"). InnerVolumeSpecName "kube-api-access-q4p9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.076242 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rx2dm" event={"ID":"16d712a5-c96f-4f52-b857-210ce226090e","Type":"ContainerDied","Data":"01f6c8ca101f493cf77f031b40c017d6292c408a92ccf0718f0391caf1bb2319"} Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.076293 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01f6c8ca101f493cf77f031b40c017d6292c408a92ccf0718f0391caf1bb2319" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.077833 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rx2dm" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.080259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7a5e-account-create-update-mcmp6" event={"ID":"7b3669f3-fc93-4d03-a114-3de9f6385fc5","Type":"ContainerDied","Data":"d535b51926130475314b853f4cba8047902f9ae3ec04be907c76b7f8f3df1ac6"} Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.080318 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d535b51926130475314b853f4cba8047902f9ae3ec04be907c76b7f8f3df1ac6" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.080380 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7a5e-account-create-update-mcmp6" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.105836 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16d712a5-c96f-4f52-b857-210ce226090e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.105869 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b3669f3-fc93-4d03-a114-3de9f6385fc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.105882 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bd8q\" (UniqueName: \"kubernetes.io/projected/7b3669f3-fc93-4d03-a114-3de9f6385fc5-kube-api-access-5bd8q\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.105895 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4p9l\" (UniqueName: \"kubernetes.io/projected/16d712a5-c96f-4f52-b857-210ce226090e-kube-api-access-q4p9l\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.265968 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.379616 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2zdbb"] Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.411966 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-dns-svc\") pod \"94d37dc7-7d79-4fcf-8971-743ef480eedd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.412035 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-nb\") pod \"94d37dc7-7d79-4fcf-8971-743ef480eedd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.412114 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hfmz\" (UniqueName: \"kubernetes.io/projected/94d37dc7-7d79-4fcf-8971-743ef480eedd-kube-api-access-5hfmz\") pod \"94d37dc7-7d79-4fcf-8971-743ef480eedd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.412134 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-sb\") pod \"94d37dc7-7d79-4fcf-8971-743ef480eedd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.412168 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-config\") pod \"94d37dc7-7d79-4fcf-8971-743ef480eedd\" (UID: \"94d37dc7-7d79-4fcf-8971-743ef480eedd\") " Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.424924 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d37dc7-7d79-4fcf-8971-743ef480eedd-kube-api-access-5hfmz" (OuterVolumeSpecName: "kube-api-access-5hfmz") pod "94d37dc7-7d79-4fcf-8971-743ef480eedd" (UID: "94d37dc7-7d79-4fcf-8971-743ef480eedd"). InnerVolumeSpecName "kube-api-access-5hfmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.468288 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "94d37dc7-7d79-4fcf-8971-743ef480eedd" (UID: "94d37dc7-7d79-4fcf-8971-743ef480eedd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.468947 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94d37dc7-7d79-4fcf-8971-743ef480eedd" (UID: "94d37dc7-7d79-4fcf-8971-743ef480eedd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.478020 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-config" (OuterVolumeSpecName: "config") pod "94d37dc7-7d79-4fcf-8971-743ef480eedd" (UID: "94d37dc7-7d79-4fcf-8971-743ef480eedd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.479499 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94d37dc7-7d79-4fcf-8971-743ef480eedd" (UID: "94d37dc7-7d79-4fcf-8971-743ef480eedd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.513930 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.513963 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.513974 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hfmz\" (UniqueName: \"kubernetes.io/projected/94d37dc7-7d79-4fcf-8971-743ef480eedd-kube-api-access-5hfmz\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.513983 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.513991 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d37dc7-7d79-4fcf-8971-743ef480eedd-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.708106 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-fsxmm"] Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.730421 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fq4tj"] Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.875084 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3c43-account-create-update-wqp2f"] Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.884066 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c85-account-create-update-xt2cz"] Feb 17 00:42:38 crc kubenswrapper[4805]: I0217 00:42:38.893512 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-3992-account-create-update-h2vts"] Feb 17 00:42:39 crc kubenswrapper[4805]: I0217 00:42:39.092475 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" event={"ID":"94d37dc7-7d79-4fcf-8971-743ef480eedd","Type":"ContainerDied","Data":"0c47682855bddf1977fad33659828f077fb63966693b84fb3bd4e0bb2d2d0f89"} Feb 17 00:42:39 crc kubenswrapper[4805]: I0217 00:42:39.092543 4805 scope.go:117] "RemoveContainer" 
containerID="cbbdbb93028a3695caef28dec2d7dd4bf33469bd8219924081b3a82e467ebf39" Feb 17 00:42:39 crc kubenswrapper[4805]: I0217 00:42:39.092708 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-xl4vd" Feb 17 00:42:39 crc kubenswrapper[4805]: I0217 00:42:39.121155 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xl4vd"] Feb 17 00:42:39 crc kubenswrapper[4805]: I0217 00:42:39.129675 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-xl4vd"] Feb 17 00:42:39 crc kubenswrapper[4805]: W0217 00:42:39.297869 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd0711d3_a423_437c_9de6_9c0be097d3bd.slice/crio-7cb51e772b1cb4c44fbe5dd51ab5b69917cb76fc7c90b104990ace8de2533154 WatchSource:0}: Error finding container 7cb51e772b1cb4c44fbe5dd51ab5b69917cb76fc7c90b104990ace8de2533154: Status 404 returned error can't find the container with id 7cb51e772b1cb4c44fbe5dd51ab5b69917cb76fc7c90b104990ace8de2533154 Feb 17 00:42:39 crc kubenswrapper[4805]: W0217 00:42:39.300943 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9e10a8c_b19f_4558_acef_2027c30614bf.slice/crio-9225af177a352203fda3d8e261f4532f5d2bc881a419f8f415833ab2cc6dc34b WatchSource:0}: Error finding container 9225af177a352203fda3d8e261f4532f5d2bc881a419f8f415833ab2cc6dc34b: Status 404 returned error can't find the container with id 9225af177a352203fda3d8e261f4532f5d2bc881a419f8f415833ab2cc6dc34b Feb 17 00:42:39 crc kubenswrapper[4805]: W0217 00:42:39.301596 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod844af17c_95de_4afa_8d20_f00cf5195840.slice/crio-632c182fb6dd45ebae9c64b3edace93623a5b58a1743a0fce2a44f19c99fd5dc WatchSource:0}: Error finding container 632c182fb6dd45ebae9c64b3edace93623a5b58a1743a0fce2a44f19c99fd5dc: Status 404 returned error can't find the container with id 632c182fb6dd45ebae9c64b3edace93623a5b58a1743a0fce2a44f19c99fd5dc Feb 17 00:42:39 crc kubenswrapper[4805]: W0217 00:42:39.303909 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06dfed54_f183_46cc_abd4_089a231b2201.slice/crio-74894be1ec924579f9bc7ec727140fae84b6ab95451f71194dd64009ee2e6b6c WatchSource:0}: Error finding container 74894be1ec924579f9bc7ec727140fae84b6ab95451f71194dd64009ee2e6b6c: Status 404 returned error can't find the container with id 74894be1ec924579f9bc7ec727140fae84b6ab95451f71194dd64009ee2e6b6c Feb 17 00:42:39 crc kubenswrapper[4805]: I0217 00:42:39.356883 4805 scope.go:117] "RemoveContainer" containerID="949eb0cabf1023f0c03621ef79beedee7e0c985ddb934e53aa2edad049af5d21" Feb 17 00:42:39 crc kubenswrapper[4805]: I0217 00:42:39.739814 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rx2dm"] Feb 17 00:42:39 crc kubenswrapper[4805]: I0217 00:42:39.748358 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rx2dm"] Feb 17 00:42:39 crc kubenswrapper[4805]: I0217 00:42:39.782175 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.102074 4805 generic.go:334] "Generic (PLEG): container 
finished" podID="06dfed54-f183-46cc-abd4-089a231b2201" containerID="8342baaad8a8d6197c0bfd4880d2722d1db08bcba63aec37af87020e83ead2cd" exitCode=0 Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.102268 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2zdbb" event={"ID":"06dfed54-f183-46cc-abd4-089a231b2201","Type":"ContainerDied","Data":"8342baaad8a8d6197c0bfd4880d2722d1db08bcba63aec37af87020e83ead2cd"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.103209 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2zdbb" event={"ID":"06dfed54-f183-46cc-abd4-089a231b2201","Type":"ContainerStarted","Data":"74894be1ec924579f9bc7ec727140fae84b6ab95451f71194dd64009ee2e6b6c"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.105408 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c298m" event={"ID":"8150553f-2c0e-4371-9b0d-22364c3c9db4","Type":"ContainerStarted","Data":"8c72fd9c7b7a0399ac2042449a29653aeb38b5fd5438ecea8eac10b1c319dbae"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.109832 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2e80aa4a-3260-4111-a066-112ffac85ae7","Type":"ContainerStarted","Data":"72eaad4e9a592e4510e72c9c7790a5f8918ca4fc0d2e811b99f50f58e14ef105"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.117900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c85-account-create-update-xt2cz" event={"ID":"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba","Type":"ContainerStarted","Data":"7abf8d1a29a20160aeb535c545d2f851a92ce0898aabfda0b32945deda7f54d6"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.118015 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c85-account-create-update-xt2cz" event={"ID":"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba","Type":"ContainerStarted","Data":"a0c5b2ac8f9f328a7fb50cdce6fcefc30af53ab8687405022b89b6f7a1ba9b5d"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.126494 4805 generic.go:334] "Generic (PLEG): container finished" podID="844af17c-95de-4afa-8d20-f00cf5195840" containerID="823eba5b85f3337f7940a009fc5f4ae29680716f2485253d4bfd4b840c130beb" exitCode=0 Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.126666 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fsxmm" event={"ID":"844af17c-95de-4afa-8d20-f00cf5195840","Type":"ContainerDied","Data":"823eba5b85f3337f7940a009fc5f4ae29680716f2485253d4bfd4b840c130beb"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.126958 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fsxmm" event={"ID":"844af17c-95de-4afa-8d20-f00cf5195840","Type":"ContainerStarted","Data":"632c182fb6dd45ebae9c64b3edace93623a5b58a1743a0fce2a44f19c99fd5dc"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.131954 4805 generic.go:334] "Generic (PLEG): container finished" podID="dd0711d3-a423-437c-9de6-9c0be097d3bd" containerID="2ba0459af916ba902fe0a984f5fa92aa763d8cef98fcc68d34591ef22554358a" exitCode=0 Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.132140 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" event={"ID":"dd0711d3-a423-437c-9de6-9c0be097d3bd","Type":"ContainerDied","Data":"2ba0459af916ba902fe0a984f5fa92aa763d8cef98fcc68d34591ef22554358a"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 
00:42:40.132264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" event={"ID":"dd0711d3-a423-437c-9de6-9c0be097d3bd","Type":"ContainerStarted","Data":"7cb51e772b1cb4c44fbe5dd51ab5b69917cb76fc7c90b104990ace8de2533154"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.134651 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" event={"ID":"1db2630f-effd-4730-a324-bbfe90d75a8a","Type":"ContainerStarted","Data":"a8bacb37f646426c210fd86904c602639990f2a74f587708204479d94952154d"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.134917 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" event={"ID":"1db2630f-effd-4730-a324-bbfe90d75a8a","Type":"ContainerStarted","Data":"ae02434c8f037cb8fc10b6caede820bfcf5d15953b9c0e5abcba6a86e64d83ae"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.143013 4805 generic.go:334] "Generic (PLEG): container finished" podID="b9e10a8c-b19f-4558-acef-2027c30614bf" containerID="3d14019d44eabd6cc556a55056d47250f87ef381b09dbcd96137383765874190" exitCode=0 Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.143068 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3c43-account-create-update-wqp2f" event={"ID":"b9e10a8c-b19f-4558-acef-2027c30614bf","Type":"ContainerDied","Data":"3d14019d44eabd6cc556a55056d47250f87ef381b09dbcd96137383765874190"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.143097 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3c43-account-create-update-wqp2f" event={"ID":"b9e10a8c-b19f-4558-acef-2027c30614bf","Type":"ContainerStarted","Data":"9225af177a352203fda3d8e261f4532f5d2bc881a419f8f415833ab2cc6dc34b"} Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.144925 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-c298m" podStartSLOduration=2.657237389 podStartE2EDuration="12.144904014s" podCreationTimestamp="2026-02-17 00:42:28 +0000 UTC" firstStartedPulling="2026-02-17 00:42:29.872034732 +0000 UTC m=+1175.887844130" lastFinishedPulling="2026-02-17 00:42:39.359701347 +0000 UTC m=+1185.375510755" observedRunningTime="2026-02-17 00:42:40.13899241 +0000 UTC m=+1186.154801818" watchObservedRunningTime="2026-02-17 00:42:40.144904014 +0000 UTC m=+1186.160713412" Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.160500 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7c85-account-create-update-xt2cz" podStartSLOduration=8.160476836 podStartE2EDuration="8.160476836s" podCreationTimestamp="2026-02-17 00:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:42:40.159035946 +0000 UTC m=+1186.174845364" watchObservedRunningTime="2026-02-17 00:42:40.160476836 +0000 UTC m=+1186.176286284" Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.201296 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=12.519544253 podStartE2EDuration="57.201281669s" podCreationTimestamp="2026-02-17 00:41:43 +0000 UTC" firstStartedPulling="2026-02-17 00:41:54.710691599 +0000 UTC m=+1140.726500997" lastFinishedPulling="2026-02-17 00:42:39.392429005 +0000 UTC m=+1185.408238413" 
observedRunningTime="2026-02-17 00:42:40.183284869 +0000 UTC m=+1186.199094277" watchObservedRunningTime="2026-02-17 00:42:40.201281669 +0000 UTC m=+1186.217091067" Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.703123 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:40 crc kubenswrapper[4805]: E0217 00:42:40.703298 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 00:42:40 crc kubenswrapper[4805]: E0217 00:42:40.703335 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 00:42:40 crc kubenswrapper[4805]: E0217 00:42:40.703387 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift podName:de228348-37d1-4ec0-9a47-11f4d895e6d6 nodeName:}" failed. No retries permitted until 2026-02-17 00:42:56.703369477 +0000 UTC m=+1202.719178875 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift") pod "swift-storage-0" (UID: "de228348-37d1-4ec0-9a47-11f4d895e6d6") : configmap "swift-ring-files" not found Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.799226 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16d712a5-c96f-4f52-b857-210ce226090e" path="/var/lib/kubelet/pods/16d712a5-c96f-4f52-b857-210ce226090e/volumes" Feb 17 00:42:40 crc kubenswrapper[4805]: I0217 00:42:40.800159 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d37dc7-7d79-4fcf-8971-743ef480eedd" path="/var/lib/kubelet/pods/94d37dc7-7d79-4fcf-8971-743ef480eedd/volumes" Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.152768 4805 generic.go:334] "Generic (PLEG): container finished" podID="d58b7ac7-8a62-4f29-bb0a-7915e01e87ba" containerID="7abf8d1a29a20160aeb535c545d2f851a92ce0898aabfda0b32945deda7f54d6" exitCode=0 Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.152829 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c85-account-create-update-xt2cz" event={"ID":"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba","Type":"ContainerDied","Data":"7abf8d1a29a20160aeb535c545d2f851a92ce0898aabfda0b32945deda7f54d6"} Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.154416 4805 generic.go:334] "Generic (PLEG): container finished" podID="1db2630f-effd-4730-a324-bbfe90d75a8a" containerID="a8bacb37f646426c210fd86904c602639990f2a74f587708204479d94952154d" exitCode=0 Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.154494 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" event={"ID":"1db2630f-effd-4730-a324-bbfe90d75a8a","Type":"ContainerDied","Data":"a8bacb37f646426c210fd86904c602639990f2a74f587708204479d94952154d"} Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.662894 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-fsxmm" Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.831715 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwdz9\" (UniqueName: \"kubernetes.io/projected/844af17c-95de-4afa-8d20-f00cf5195840-kube-api-access-rwdz9\") pod \"844af17c-95de-4afa-8d20-f00cf5195840\" (UID: \"844af17c-95de-4afa-8d20-f00cf5195840\") " Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.831785 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/844af17c-95de-4afa-8d20-f00cf5195840-operator-scripts\") pod \"844af17c-95de-4afa-8d20-f00cf5195840\" (UID: \"844af17c-95de-4afa-8d20-f00cf5195840\") " Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.833540 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/844af17c-95de-4afa-8d20-f00cf5195840-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "844af17c-95de-4afa-8d20-f00cf5195840" (UID: "844af17c-95de-4afa-8d20-f00cf5195840"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.838476 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/844af17c-95de-4afa-8d20-f00cf5195840-kube-api-access-rwdz9" (OuterVolumeSpecName: "kube-api-access-rwdz9") pod "844af17c-95de-4afa-8d20-f00cf5195840" (UID: "844af17c-95de-4afa-8d20-f00cf5195840"). InnerVolumeSpecName "kube-api-access-rwdz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.877387 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2zdbb" Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.893560 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.904339 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.920126 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3c43-account-create-update-wqp2f" Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.933540 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwdz9\" (UniqueName: \"kubernetes.io/projected/844af17c-95de-4afa-8d20-f00cf5195840-kube-api-access-rwdz9\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:41 crc kubenswrapper[4805]: I0217 00:42:41.933575 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/844af17c-95de-4afa-8d20-f00cf5195840-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.035073 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrx4n\" (UniqueName: \"kubernetes.io/projected/06dfed54-f183-46cc-abd4-089a231b2201-kube-api-access-zrx4n\") pod \"06dfed54-f183-46cc-abd4-089a231b2201\" (UID: \"06dfed54-f183-46cc-abd4-089a231b2201\") " Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.035185 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e10a8c-b19f-4558-acef-2027c30614bf-operator-scripts\") pod \"b9e10a8c-b19f-4558-acef-2027c30614bf\" (UID: \"b9e10a8c-b19f-4558-acef-2027c30614bf\") " Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.035273 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn4md\" (UniqueName: \"kubernetes.io/projected/1db2630f-effd-4730-a324-bbfe90d75a8a-kube-api-access-tn4md\") pod \"1db2630f-effd-4730-a324-bbfe90d75a8a\" (UID: \"1db2630f-effd-4730-a324-bbfe90d75a8a\") " Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.035381 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0711d3-a423-437c-9de6-9c0be097d3bd-operator-scripts\") pod \"dd0711d3-a423-437c-9de6-9c0be097d3bd\" (UID: \"dd0711d3-a423-437c-9de6-9c0be097d3bd\") " Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.035424 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06dfed54-f183-46cc-abd4-089a231b2201-operator-scripts\") pod \"06dfed54-f183-46cc-abd4-089a231b2201\" (UID: \"06dfed54-f183-46cc-abd4-089a231b2201\") " Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.035459 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r88x\" (UniqueName: \"kubernetes.io/projected/dd0711d3-a423-437c-9de6-9c0be097d3bd-kube-api-access-8r88x\") pod \"dd0711d3-a423-437c-9de6-9c0be097d3bd\" (UID: \"dd0711d3-a423-437c-9de6-9c0be097d3bd\") " Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.035526 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnqd6\" (UniqueName: \"kubernetes.io/projected/b9e10a8c-b19f-4558-acef-2027c30614bf-kube-api-access-dnqd6\") pod \"b9e10a8c-b19f-4558-acef-2027c30614bf\" (UID: \"b9e10a8c-b19f-4558-acef-2027c30614bf\") " Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.035593 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db2630f-effd-4730-a324-bbfe90d75a8a-operator-scripts\") pod \"1db2630f-effd-4730-a324-bbfe90d75a8a\" (UID: \"1db2630f-effd-4730-a324-bbfe90d75a8a\") " Feb 17 
00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.035773 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9e10a8c-b19f-4558-acef-2027c30614bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9e10a8c-b19f-4558-acef-2027c30614bf" (UID: "b9e10a8c-b19f-4558-acef-2027c30614bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.035887 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd0711d3-a423-437c-9de6-9c0be097d3bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd0711d3-a423-437c-9de6-9c0be097d3bd" (UID: "dd0711d3-a423-437c-9de6-9c0be097d3bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.036413 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db2630f-effd-4730-a324-bbfe90d75a8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1db2630f-effd-4730-a324-bbfe90d75a8a" (UID: "1db2630f-effd-4730-a324-bbfe90d75a8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.036946 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e10a8c-b19f-4558-acef-2027c30614bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.036975 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd0711d3-a423-437c-9de6-9c0be097d3bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.036988 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1db2630f-effd-4730-a324-bbfe90d75a8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.038288 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06dfed54-f183-46cc-abd4-089a231b2201-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06dfed54-f183-46cc-abd4-089a231b2201" (UID: "06dfed54-f183-46cc-abd4-089a231b2201"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.039746 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0711d3-a423-437c-9de6-9c0be097d3bd-kube-api-access-8r88x" (OuterVolumeSpecName: "kube-api-access-8r88x") pod "dd0711d3-a423-437c-9de6-9c0be097d3bd" (UID: "dd0711d3-a423-437c-9de6-9c0be097d3bd"). InnerVolumeSpecName "kube-api-access-8r88x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.040268 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db2630f-effd-4730-a324-bbfe90d75a8a-kube-api-access-tn4md" (OuterVolumeSpecName: "kube-api-access-tn4md") pod "1db2630f-effd-4730-a324-bbfe90d75a8a" (UID: "1db2630f-effd-4730-a324-bbfe90d75a8a"). InnerVolumeSpecName "kube-api-access-tn4md". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.045413 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06dfed54-f183-46cc-abd4-089a231b2201-kube-api-access-zrx4n" (OuterVolumeSpecName: "kube-api-access-zrx4n") pod "06dfed54-f183-46cc-abd4-089a231b2201" (UID: "06dfed54-f183-46cc-abd4-089a231b2201"). InnerVolumeSpecName "kube-api-access-zrx4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.048532 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e10a8c-b19f-4558-acef-2027c30614bf-kube-api-access-dnqd6" (OuterVolumeSpecName: "kube-api-access-dnqd6") pod "b9e10a8c-b19f-4558-acef-2027c30614bf" (UID: "b9e10a8c-b19f-4558-acef-2027c30614bf"). InnerVolumeSpecName "kube-api-access-dnqd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.134962 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-j7v5m"] Feb 17 00:42:42 crc kubenswrapper[4805]: E0217 00:42:42.135400 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06dfed54-f183-46cc-abd4-089a231b2201" containerName="mariadb-database-create" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135420 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="06dfed54-f183-46cc-abd4-089a231b2201" containerName="mariadb-database-create" Feb 17 00:42:42 crc kubenswrapper[4805]: E0217 00:42:42.135430 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d37dc7-7d79-4fcf-8971-743ef480eedd" containerName="init" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135438 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d37dc7-7d79-4fcf-8971-743ef480eedd" containerName="init" Feb 17 00:42:42 crc kubenswrapper[4805]: E0217 00:42:42.135467 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db2630f-effd-4730-a324-bbfe90d75a8a" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135475 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db2630f-effd-4730-a324-bbfe90d75a8a" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: E0217 00:42:42.135491 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d712a5-c96f-4f52-b857-210ce226090e" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135498 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d712a5-c96f-4f52-b857-210ce226090e" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: E0217 00:42:42.135514 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e10a8c-b19f-4558-acef-2027c30614bf" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135521 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e10a8c-b19f-4558-acef-2027c30614bf" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: E0217 00:42:42.135534 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844af17c-95de-4afa-8d20-f00cf5195840" containerName="mariadb-database-create" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135541 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="844af17c-95de-4afa-8d20-f00cf5195840" containerName="mariadb-database-create" Feb 17 00:42:42 crc kubenswrapper[4805]: E0217 00:42:42.135552 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d37dc7-7d79-4fcf-8971-743ef480eedd" containerName="dnsmasq-dns" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135559 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d37dc7-7d79-4fcf-8971-743ef480eedd" containerName="dnsmasq-dns" Feb 17 00:42:42 crc kubenswrapper[4805]: E0217 00:42:42.135572 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3669f3-fc93-4d03-a114-3de9f6385fc5" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135579 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3669f3-fc93-4d03-a114-3de9f6385fc5" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: E0217 00:42:42.135593 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0711d3-a423-437c-9de6-9c0be097d3bd" containerName="mariadb-database-create" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135600 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0711d3-a423-437c-9de6-9c0be097d3bd" containerName="mariadb-database-create" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135788 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="06dfed54-f183-46cc-abd4-089a231b2201" containerName="mariadb-database-create" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135826 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="844af17c-95de-4afa-8d20-f00cf5195840" containerName="mariadb-database-create" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135845 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d37dc7-7d79-4fcf-8971-743ef480eedd" containerName="dnsmasq-dns" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135873 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d712a5-c96f-4f52-b857-210ce226090e" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135894 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db2630f-effd-4730-a324-bbfe90d75a8a" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135917 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3669f3-fc93-4d03-a114-3de9f6385fc5" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135928 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0711d3-a423-437c-9de6-9c0be097d3bd" containerName="mariadb-database-create" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.135944 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e10a8c-b19f-4558-acef-2027c30614bf" containerName="mariadb-account-create-update" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.138854 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnqd6\" (UniqueName: \"kubernetes.io/projected/b9e10a8c-b19f-4558-acef-2027c30614bf-kube-api-access-dnqd6\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.138888 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrx4n\" (UniqueName: \"kubernetes.io/projected/06dfed54-f183-46cc-abd4-089a231b2201-kube-api-access-zrx4n\") on node 
\"crc\" DevicePath \"\"" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.138897 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn4md\" (UniqueName: \"kubernetes.io/projected/1db2630f-effd-4730-a324-bbfe90d75a8a-kube-api-access-tn4md\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.138911 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06dfed54-f183-46cc-abd4-089a231b2201-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.138920 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r88x\" (UniqueName: \"kubernetes.io/projected/dd0711d3-a423-437c-9de6-9c0be097d3bd-kube-api-access-8r88x\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.139879 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.144086 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.144232 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fpn7q" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.150683 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-j7v5m"] Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.186563 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.187286 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3992-account-create-update-h2vts" event={"ID":"1db2630f-effd-4730-a324-bbfe90d75a8a","Type":"ContainerDied","Data":"ae02434c8f037cb8fc10b6caede820bfcf5d15953b9c0e5abcba6a86e64d83ae"} Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.187338 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae02434c8f037cb8fc10b6caede820bfcf5d15953b9c0e5abcba6a86e64d83ae" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.188919 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fsxmm" event={"ID":"844af17c-95de-4afa-8d20-f00cf5195840","Type":"ContainerDied","Data":"632c182fb6dd45ebae9c64b3edace93623a5b58a1743a0fce2a44f19c99fd5dc"} Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.188940 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="632c182fb6dd45ebae9c64b3edace93623a5b58a1743a0fce2a44f19c99fd5dc" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.188992 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fsxmm" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.195624 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3c43-account-create-update-wqp2f" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.195611 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3c43-account-create-update-wqp2f" event={"ID":"b9e10a8c-b19f-4558-acef-2027c30614bf","Type":"ContainerDied","Data":"9225af177a352203fda3d8e261f4532f5d2bc881a419f8f415833ab2cc6dc34b"} Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.195675 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9225af177a352203fda3d8e261f4532f5d2bc881a419f8f415833ab2cc6dc34b" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.197548 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2zdbb" event={"ID":"06dfed54-f183-46cc-abd4-089a231b2201","Type":"ContainerDied","Data":"74894be1ec924579f9bc7ec727140fae84b6ab95451f71194dd64009ee2e6b6c"} Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.197580 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74894be1ec924579f9bc7ec727140fae84b6ab95451f71194dd64009ee2e6b6c" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.197668 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2zdbb" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.202373 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.202376 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fq4tj" event={"ID":"dd0711d3-a423-437c-9de6-9c0be097d3bd","Type":"ContainerDied","Data":"7cb51e772b1cb4c44fbe5dd51ab5b69917cb76fc7c90b104990ace8de2533154"} Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.202841 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cb51e772b1cb4c44fbe5dd51ab5b69917cb76fc7c90b104990ace8de2533154" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.265857 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cpgf5" podUID="1fc3dff9-1209-4d8b-8927-96f5ffac33f6" containerName="ovn-controller" probeResult="failure" output=< Feb 17 00:42:42 crc kubenswrapper[4805]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 00:42:42 crc kubenswrapper[4805]: > Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.342261 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-db-sync-config-data\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.342355 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4cf2\" (UniqueName: \"kubernetes.io/projected/38464d88-9f3b-485b-872a-98ed2ea8e3be-kube-api-access-s4cf2\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.342395 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-config-data\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.342426 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-combined-ca-bundle\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.444407 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4cf2\" (UniqueName: \"kubernetes.io/projected/38464d88-9f3b-485b-872a-98ed2ea8e3be-kube-api-access-s4cf2\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.444510 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-config-data\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.444543 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-combined-ca-bundle\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.444647 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-db-sync-config-data\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.449039 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-db-sync-config-data\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.450246 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-config-data\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.454510 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-combined-ca-bundle\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.460336 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4cf2\" (UniqueName: \"kubernetes.io/projected/38464d88-9f3b-485b-872a-98ed2ea8e3be-kube-api-access-s4cf2\") pod \"glance-db-sync-j7v5m\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " 
pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.471951 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j7v5m" Feb 17 00:42:42 crc kubenswrapper[4805]: I0217 00:42:42.507731 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7c85-account-create-update-xt2cz" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:42.647903 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdn7r\" (UniqueName: \"kubernetes.io/projected/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-kube-api-access-jdn7r\") pod \"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba\" (UID: \"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba\") " Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:42.648399 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-operator-scripts\") pod \"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba\" (UID: \"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba\") " Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:42.649068 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d58b7ac7-8a62-4f29-bb0a-7915e01e87ba" (UID: "d58b7ac7-8a62-4f29-bb0a-7915e01e87ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:42.652563 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-kube-api-access-jdn7r" (OuterVolumeSpecName: "kube-api-access-jdn7r") pod "d58b7ac7-8a62-4f29-bb0a-7915e01e87ba" (UID: "d58b7ac7-8a62-4f29-bb0a-7915e01e87ba"). InnerVolumeSpecName "kube-api-access-jdn7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:42.750652 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:42.750699 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdn7r\" (UniqueName: \"kubernetes.io/projected/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba-kube-api-access-jdn7r\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.211003 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c85-account-create-update-xt2cz" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.210997 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c85-account-create-update-xt2cz" event={"ID":"d58b7ac7-8a62-4f29-bb0a-7915e01e87ba","Type":"ContainerDied","Data":"a0c5b2ac8f9f328a7fb50cdce6fcefc30af53ab8687405022b89b6f7a1ba9b5d"} Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.211368 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0c5b2ac8f9f328a7fb50cdce6fcefc30af53ab8687405022b89b6f7a1ba9b5d" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.214031 4805 generic.go:334] "Generic (PLEG): container finished" podID="dc55b214-5b43-49cd-aadb-967188b34da1" containerID="937219e051ca008592afb84a19bc551c316843281575cc9779fe5a8e5ffe5bd5" exitCode=0 Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.214073 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc55b214-5b43-49cd-aadb-967188b34da1","Type":"ContainerDied","Data":"937219e051ca008592afb84a19bc551c316843281575cc9779fe5a8e5ffe5bd5"} Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.379123 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5zwqp"] Feb 17 00:42:43 crc kubenswrapper[4805]: E0217 00:42:43.379665 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d58b7ac7-8a62-4f29-bb0a-7915e01e87ba" containerName="mariadb-account-create-update" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.379686 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d58b7ac7-8a62-4f29-bb0a-7915e01e87ba" containerName="mariadb-account-create-update" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.379924 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d58b7ac7-8a62-4f29-bb0a-7915e01e87ba" containerName="mariadb-account-create-update" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.380613 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5zwqp" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.384187 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.392342 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5zwqp"] Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.479588 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs6wh\" (UniqueName: \"kubernetes.io/projected/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-kube-api-access-vs6wh\") pod \"root-account-create-update-5zwqp\" (UID: \"ad49d26f-ba62-4191-bb6c-1fa3a56401cb\") " pod="openstack/root-account-create-update-5zwqp" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.479676 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-operator-scripts\") pod \"root-account-create-update-5zwqp\" (UID: \"ad49d26f-ba62-4191-bb6c-1fa3a56401cb\") " pod="openstack/root-account-create-update-5zwqp" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.581813 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs6wh\" (UniqueName: \"kubernetes.io/projected/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-kube-api-access-vs6wh\") pod \"root-account-create-update-5zwqp\" (UID: \"ad49d26f-ba62-4191-bb6c-1fa3a56401cb\") " pod="openstack/root-account-create-update-5zwqp" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.581885 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-operator-scripts\") pod \"root-account-create-update-5zwqp\" (UID: \"ad49d26f-ba62-4191-bb6c-1fa3a56401cb\") " pod="openstack/root-account-create-update-5zwqp" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.583120 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-operator-scripts\") pod \"root-account-create-update-5zwqp\" (UID: \"ad49d26f-ba62-4191-bb6c-1fa3a56401cb\") " pod="openstack/root-account-create-update-5zwqp" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.601217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs6wh\" (UniqueName: \"kubernetes.io/projected/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-kube-api-access-vs6wh\") pod \"root-account-create-update-5zwqp\" (UID: \"ad49d26f-ba62-4191-bb6c-1fa3a56401cb\") " pod="openstack/root-account-create-update-5zwqp" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.777154 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-j7v5m"] Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.804173 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5zwqp" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.807868 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-hw88l"] Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.809780 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.835151 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-hw88l"] Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.889365 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-hw88l\" (UID: \"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.889468 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxj4j\" (UniqueName: \"kubernetes.io/projected/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-kube-api-access-nxj4j\") pod \"mysqld-exporter-openstack-cell1-db-create-hw88l\" (UID: \"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.992333 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-hw88l\" (UID: \"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.992724 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxj4j\" (UniqueName: \"kubernetes.io/projected/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-kube-api-access-nxj4j\") pod \"mysqld-exporter-openstack-cell1-db-create-hw88l\" (UID: \"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" Feb 17 00:42:43 crc kubenswrapper[4805]: I0217 00:42:43.994182 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-hw88l\" (UID: \"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.001459 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-3d7f-account-create-update-hd6xl"] Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.033289 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-3d7f-account-create-update-hd6xl"] Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.033448 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.038046 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.073608 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxj4j\" (UniqueName: \"kubernetes.io/projected/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-kube-api-access-nxj4j\") pod \"mysqld-exporter-openstack-cell1-db-create-hw88l\" (UID: \"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.094258 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ab61f86-d58e-4874-99f0-bd197d671827-operator-scripts\") pod \"mysqld-exporter-3d7f-account-create-update-hd6xl\" (UID: \"3ab61f86-d58e-4874-99f0-bd197d671827\") " pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.094395 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbd75\" (UniqueName: \"kubernetes.io/projected/3ab61f86-d58e-4874-99f0-bd197d671827-kube-api-access-kbd75\") pod \"mysqld-exporter-3d7f-account-create-update-hd6xl\" (UID: \"3ab61f86-d58e-4874-99f0-bd197d671827\") " pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.135713 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.199297 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbd75\" (UniqueName: \"kubernetes.io/projected/3ab61f86-d58e-4874-99f0-bd197d671827-kube-api-access-kbd75\") pod \"mysqld-exporter-3d7f-account-create-update-hd6xl\" (UID: \"3ab61f86-d58e-4874-99f0-bd197d671827\") " pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.199658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ab61f86-d58e-4874-99f0-bd197d671827-operator-scripts\") pod \"mysqld-exporter-3d7f-account-create-update-hd6xl\" (UID: \"3ab61f86-d58e-4874-99f0-bd197d671827\") " pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.200235 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ab61f86-d58e-4874-99f0-bd197d671827-operator-scripts\") pod \"mysqld-exporter-3d7f-account-create-update-hd6xl\" (UID: \"3ab61f86-d58e-4874-99f0-bd197d671827\") " pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.236429 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbd75\" (UniqueName: \"kubernetes.io/projected/3ab61f86-d58e-4874-99f0-bd197d671827-kube-api-access-kbd75\") pod \"mysqld-exporter-3d7f-account-create-update-hd6xl\" (UID: \"3ab61f86-d58e-4874-99f0-bd197d671827\") " 
pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.270559 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc55b214-5b43-49cd-aadb-967188b34da1","Type":"ContainerStarted","Data":"8f7b5996bb3baf66a48bffeafa69160fb68716c1e0a3995629306da5bb81fb20"} Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.271579 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.289597 4805 generic.go:334] "Generic (PLEG): container finished" podID="e2ca81e9-e569-4f1b-afcc-be3e47407114" containerID="d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c" exitCode=0 Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.289666 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2ca81e9-e569-4f1b-afcc-be3e47407114","Type":"ContainerDied","Data":"d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c"} Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.308138 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j7v5m" event={"ID":"38464d88-9f3b-485b-872a-98ed2ea8e3be","Type":"ContainerStarted","Data":"076ffc8953438c609efd574e31720b724aa40611838224e3396e02b72ef5a5fe"} Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.351003 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=55.075233014 podStartE2EDuration="1m8.350982018s" podCreationTimestamp="2026-02-17 00:41:36 +0000 UTC" firstStartedPulling="2026-02-17 00:41:54.435815088 +0000 UTC m=+1140.451624486" lastFinishedPulling="2026-02-17 00:42:07.711564102 +0000 UTC m=+1153.727373490" observedRunningTime="2026-02-17 00:42:44.320318656 +0000 UTC m=+1190.336128054" watchObservedRunningTime="2026-02-17 00:42:44.350982018 +0000 UTC m=+1190.366791416" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.372494 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5zwqp"] Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.420662 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.745512 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-hw88l"] Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.941109 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-3d7f-account-create-update-hd6xl"] Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.958279 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.958317 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:44 crc kubenswrapper[4805]: I0217 00:42:44.961175 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:44 crc kubenswrapper[4805]: W0217 00:42:44.965607 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab61f86_d58e_4874_99f0_bd197d671827.slice/crio-0c82133f773a94f1a14b8824c3e67a74cb41dcdf47efa3de17d542113179da57 WatchSource:0}: Error finding container 0c82133f773a94f1a14b8824c3e67a74cb41dcdf47efa3de17d542113179da57: Status 404 returned error can't find the container with id 0c82133f773a94f1a14b8824c3e67a74cb41dcdf47efa3de17d542113179da57 Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.320054 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" event={"ID":"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb","Type":"ContainerStarted","Data":"2b410673a8d29b0f411b0b1f4320ff4063117ab09c42d227c985e1750a9a2fca"} Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.320118 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" event={"ID":"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb","Type":"ContainerStarted","Data":"fa69586f7069d256f2ba6b7cfc0430a87e31ccc0c782fc916f5fbbd3abd8d1e7"} Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.326189 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2ca81e9-e569-4f1b-afcc-be3e47407114","Type":"ContainerStarted","Data":"596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e"} Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.326590 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.330778 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" event={"ID":"3ab61f86-d58e-4874-99f0-bd197d671827","Type":"ContainerStarted","Data":"7857cdd60814a3b1196dacd096d320c926e89bf3ae0358b634b2e3bbf5f7b5c0"} Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.330810 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" event={"ID":"3ab61f86-d58e-4874-99f0-bd197d671827","Type":"ContainerStarted","Data":"0c82133f773a94f1a14b8824c3e67a74cb41dcdf47efa3de17d542113179da57"} Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.338598 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5zwqp" 
event={"ID":"ad49d26f-ba62-4191-bb6c-1fa3a56401cb","Type":"ContainerStarted","Data":"a967e96b4fc9fb26d0d2c908cb214ed3caac2bda655ee5c46048d0e504a60b3a"} Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.338688 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5zwqp" event={"ID":"ad49d26f-ba62-4191-bb6c-1fa3a56401cb","Type":"ContainerStarted","Data":"edba6d152c42d9f7a67b44b01ae3a251811f841611e92ebb6b50ac66adf06762"} Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.341152 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.343513 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" podStartSLOduration=2.343493951 podStartE2EDuration="2.343493951s" podCreationTimestamp="2026-02-17 00:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:42:45.337766212 +0000 UTC m=+1191.353575610" watchObservedRunningTime="2026-02-17 00:42:45.343493951 +0000 UTC m=+1191.359303349" Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.369990 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.900796133 podStartE2EDuration="1m9.369966756s" podCreationTimestamp="2026-02-17 00:41:36 +0000 UTC" firstStartedPulling="2026-02-17 00:41:54.456474622 +0000 UTC m=+1140.472284020" lastFinishedPulling="2026-02-17 00:42:07.925645245 +0000 UTC m=+1153.941454643" observedRunningTime="2026-02-17 00:42:45.364666478 +0000 UTC m=+1191.380475876" watchObservedRunningTime="2026-02-17 00:42:45.369966756 +0000 UTC m=+1191.385776154" Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.388216 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" podStartSLOduration=2.388193852 podStartE2EDuration="2.388193852s" podCreationTimestamp="2026-02-17 00:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:42:45.382468023 +0000 UTC m=+1191.398277421" watchObservedRunningTime="2026-02-17 00:42:45.388193852 +0000 UTC m=+1191.404003250" Feb 17 00:42:45 crc kubenswrapper[4805]: I0217 00:42:45.401910 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-5zwqp" podStartSLOduration=2.401893812 podStartE2EDuration="2.401893812s" podCreationTimestamp="2026-02-17 00:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:42:45.397164641 +0000 UTC m=+1191.412974039" watchObservedRunningTime="2026-02-17 00:42:45.401893812 +0000 UTC m=+1191.417703200" Feb 17 00:42:46 crc kubenswrapper[4805]: I0217 00:42:46.349512 4805 generic.go:334] "Generic (PLEG): container finished" podID="3ab61f86-d58e-4874-99f0-bd197d671827" containerID="7857cdd60814a3b1196dacd096d320c926e89bf3ae0358b634b2e3bbf5f7b5c0" exitCode=0 Feb 17 00:42:46 crc kubenswrapper[4805]: I0217 00:42:46.349611 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" 
event={"ID":"3ab61f86-d58e-4874-99f0-bd197d671827","Type":"ContainerDied","Data":"7857cdd60814a3b1196dacd096d320c926e89bf3ae0358b634b2e3bbf5f7b5c0"} Feb 17 00:42:46 crc kubenswrapper[4805]: I0217 00:42:46.351261 4805 generic.go:334] "Generic (PLEG): container finished" podID="ad49d26f-ba62-4191-bb6c-1fa3a56401cb" containerID="a967e96b4fc9fb26d0d2c908cb214ed3caac2bda655ee5c46048d0e504a60b3a" exitCode=0 Feb 17 00:42:46 crc kubenswrapper[4805]: I0217 00:42:46.351295 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5zwqp" event={"ID":"ad49d26f-ba62-4191-bb6c-1fa3a56401cb","Type":"ContainerDied","Data":"a967e96b4fc9fb26d0d2c908cb214ed3caac2bda655ee5c46048d0e504a60b3a"} Feb 17 00:42:46 crc kubenswrapper[4805]: I0217 00:42:46.353376 4805 generic.go:334] "Generic (PLEG): container finished" podID="4a6ab18e-af1c-44c2-9d84-cb294ed04fdb" containerID="2b410673a8d29b0f411b0b1f4320ff4063117ab09c42d227c985e1750a9a2fca" exitCode=0 Feb 17 00:42:46 crc kubenswrapper[4805]: I0217 00:42:46.353440 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" event={"ID":"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb","Type":"ContainerDied","Data":"2b410673a8d29b0f411b0b1f4320ff4063117ab09c42d227c985e1750a9a2fca"} Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.275700 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cpgf5" podUID="1fc3dff9-1209-4d8b-8927-96f5ffac33f6" containerName="ovn-controller" probeResult="failure" output=< Feb 17 00:42:47 crc kubenswrapper[4805]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 00:42:47 crc kubenswrapper[4805]: > Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.291778 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.440665 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dlg8k" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.835196 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cpgf5-config-sctvr"] Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.850228 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.854653 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.867405 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cpgf5-config-sctvr"] Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.874275 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-log-ovn\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.874395 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r9tc\" (UniqueName: \"kubernetes.io/projected/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-kube-api-access-4r9tc\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.874478 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-additional-scripts\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.874512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.874549 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-scripts\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.874574 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run-ovn\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.976265 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r9tc\" (UniqueName: \"kubernetes.io/projected/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-kube-api-access-4r9tc\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.976587 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-additional-scripts\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.976616 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.976640 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-scripts\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.976661 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run-ovn\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.976730 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-log-ovn\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.976949 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.976966 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-log-ovn\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.977889 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-additional-scripts\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.978014 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run-ovn\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:47 crc kubenswrapper[4805]: I0217 00:42:47.979021 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-scripts\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.011404 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r9tc\" (UniqueName: \"kubernetes.io/projected/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-kube-api-access-4r9tc\") pod \"ovn-controller-cpgf5-config-sctvr\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.025025 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5zwqp" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.077845 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs6wh\" (UniqueName: \"kubernetes.io/projected/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-kube-api-access-vs6wh\") pod \"ad49d26f-ba62-4191-bb6c-1fa3a56401cb\" (UID: \"ad49d26f-ba62-4191-bb6c-1fa3a56401cb\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.078005 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-operator-scripts\") pod \"ad49d26f-ba62-4191-bb6c-1fa3a56401cb\" (UID: \"ad49d26f-ba62-4191-bb6c-1fa3a56401cb\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.081128 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad49d26f-ba62-4191-bb6c-1fa3a56401cb" (UID: "ad49d26f-ba62-4191-bb6c-1fa3a56401cb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.093477 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.093905 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="prometheus" containerID="cri-o://4a484a728298a20e1b9848c9f9e613d8f0b2cde3abff5c17f26d492acef20f12" gracePeriod=600 Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.093982 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="thanos-sidecar" containerID="cri-o://72eaad4e9a592e4510e72c9c7790a5f8918ca4fc0d2e811b99f50f58e14ef105" gracePeriod=600 Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.094039 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="config-reloader" containerID="cri-o://fbae9e878feb63f82315f0132657cf816b416945cc3d258c27f8087be798bcef" gracePeriod=600 Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.098555 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-kube-api-access-vs6wh" (OuterVolumeSpecName: "kube-api-access-vs6wh") pod "ad49d26f-ba62-4191-bb6c-1fa3a56401cb" (UID: "ad49d26f-ba62-4191-bb6c-1fa3a56401cb"). InnerVolumeSpecName "kube-api-access-vs6wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.181127 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs6wh\" (UniqueName: \"kubernetes.io/projected/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-kube-api-access-vs6wh\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.181170 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad49d26f-ba62-4191-bb6c-1fa3a56401cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.186689 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.195205 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.198663 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.282078 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbd75\" (UniqueName: \"kubernetes.io/projected/3ab61f86-d58e-4874-99f0-bd197d671827-kube-api-access-kbd75\") pod \"3ab61f86-d58e-4874-99f0-bd197d671827\" (UID: \"3ab61f86-d58e-4874-99f0-bd197d671827\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.282185 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-operator-scripts\") pod \"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb\" (UID: \"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.282962 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a6ab18e-af1c-44c2-9d84-cb294ed04fdb" (UID: "4a6ab18e-af1c-44c2-9d84-cb294ed04fdb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.283035 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ab61f86-d58e-4874-99f0-bd197d671827-operator-scripts\") pod \"3ab61f86-d58e-4874-99f0-bd197d671827\" (UID: \"3ab61f86-d58e-4874-99f0-bd197d671827\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.283065 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxj4j\" (UniqueName: \"kubernetes.io/projected/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-kube-api-access-nxj4j\") pod \"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb\" (UID: \"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.283449 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab61f86-d58e-4874-99f0-bd197d671827-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ab61f86-d58e-4874-99f0-bd197d671827" (UID: "3ab61f86-d58e-4874-99f0-bd197d671827"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.284141 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.284159 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ab61f86-d58e-4874-99f0-bd197d671827-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.287817 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-kube-api-access-nxj4j" (OuterVolumeSpecName: "kube-api-access-nxj4j") pod "4a6ab18e-af1c-44c2-9d84-cb294ed04fdb" (UID: "4a6ab18e-af1c-44c2-9d84-cb294ed04fdb"). InnerVolumeSpecName "kube-api-access-nxj4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.287898 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab61f86-d58e-4874-99f0-bd197d671827-kube-api-access-kbd75" (OuterVolumeSpecName: "kube-api-access-kbd75") pod "3ab61f86-d58e-4874-99f0-bd197d671827" (UID: "3ab61f86-d58e-4874-99f0-bd197d671827"). InnerVolumeSpecName "kube-api-access-kbd75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.385708 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxj4j\" (UniqueName: \"kubernetes.io/projected/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb-kube-api-access-nxj4j\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.386051 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbd75\" (UniqueName: \"kubernetes.io/projected/3ab61f86-d58e-4874-99f0-bd197d671827-kube-api-access-kbd75\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.389907 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5zwqp" event={"ID":"ad49d26f-ba62-4191-bb6c-1fa3a56401cb","Type":"ContainerDied","Data":"edba6d152c42d9f7a67b44b01ae3a251811f841611e92ebb6b50ac66adf06762"} Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.389939 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edba6d152c42d9f7a67b44b01ae3a251811f841611e92ebb6b50ac66adf06762" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.389988 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5zwqp" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.394122 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.394130 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-hw88l" event={"ID":"4a6ab18e-af1c-44c2-9d84-cb294ed04fdb","Type":"ContainerDied","Data":"fa69586f7069d256f2ba6b7cfc0430a87e31ccc0c782fc916f5fbbd3abd8d1e7"} Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.394163 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa69586f7069d256f2ba6b7cfc0430a87e31ccc0c782fc916f5fbbd3abd8d1e7" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.400686 4805 generic.go:334] "Generic (PLEG): container finished" podID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerID="72eaad4e9a592e4510e72c9c7790a5f8918ca4fc0d2e811b99f50f58e14ef105" exitCode=0 Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.400713 4805 generic.go:334] "Generic (PLEG): container finished" podID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerID="fbae9e878feb63f82315f0132657cf816b416945cc3d258c27f8087be798bcef" exitCode=0 Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.400721 4805 generic.go:334] "Generic (PLEG): container finished" podID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerID="4a484a728298a20e1b9848c9f9e613d8f0b2cde3abff5c17f26d492acef20f12" exitCode=0 Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.400777 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2e80aa4a-3260-4111-a066-112ffac85ae7","Type":"ContainerDied","Data":"72eaad4e9a592e4510e72c9c7790a5f8918ca4fc0d2e811b99f50f58e14ef105"} Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.400810 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2e80aa4a-3260-4111-a066-112ffac85ae7","Type":"ContainerDied","Data":"fbae9e878feb63f82315f0132657cf816b416945cc3d258c27f8087be798bcef"} Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.400822 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2e80aa4a-3260-4111-a066-112ffac85ae7","Type":"ContainerDied","Data":"4a484a728298a20e1b9848c9f9e613d8f0b2cde3abff5c17f26d492acef20f12"} Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.410084 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" event={"ID":"3ab61f86-d58e-4874-99f0-bd197d671827","Type":"ContainerDied","Data":"0c82133f773a94f1a14b8824c3e67a74cb41dcdf47efa3de17d542113179da57"} Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.410114 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c82133f773a94f1a14b8824c3e67a74cb41dcdf47efa3de17d542113179da57" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.410165 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-3d7f-account-create-update-hd6xl" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.754782 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cpgf5-config-sctvr"] Feb 17 00:42:48 crc kubenswrapper[4805]: W0217 00:42:48.756027 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae62f1dd_4aaa_40c5_a682_6064ed39c0e1.slice/crio-2c0d79e7047bf9cb7b04c08239235e849a69801d35cfbe2dccb4e730ab37c602 WatchSource:0}: Error finding container 2c0d79e7047bf9cb7b04c08239235e849a69801d35cfbe2dccb4e730ab37c602: Status 404 returned error can't find the container with id 2c0d79e7047bf9cb7b04c08239235e849a69801d35cfbe2dccb4e730ab37c602 Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.767880 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.796398 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-1\") pod \"2e80aa4a-3260-4111-a066-112ffac85ae7\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.796462 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"2e80aa4a-3260-4111-a066-112ffac85ae7\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.796513 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r556b\" (UniqueName: \"kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-kube-api-access-r556b\") pod \"2e80aa4a-3260-4111-a066-112ffac85ae7\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.796544 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-thanos-prometheus-http-client-file\") pod \"2e80aa4a-3260-4111-a066-112ffac85ae7\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.796561 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-config\") pod \"2e80aa4a-3260-4111-a066-112ffac85ae7\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.796591 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e80aa4a-3260-4111-a066-112ffac85ae7-config-out\") pod \"2e80aa4a-3260-4111-a066-112ffac85ae7\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.796613 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-web-config\") pod \"2e80aa4a-3260-4111-a066-112ffac85ae7\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " Feb 17 00:42:48 crc kubenswrapper[4805]: 
I0217 00:42:48.796701 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-0\") pod \"2e80aa4a-3260-4111-a066-112ffac85ae7\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.796748 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-2\") pod \"2e80aa4a-3260-4111-a066-112ffac85ae7\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.796791 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-tls-assets\") pod \"2e80aa4a-3260-4111-a066-112ffac85ae7\" (UID: \"2e80aa4a-3260-4111-a066-112ffac85ae7\") " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.806148 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "2e80aa4a-3260-4111-a066-112ffac85ae7" (UID: "2e80aa4a-3260-4111-a066-112ffac85ae7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.810749 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "2e80aa4a-3260-4111-a066-112ffac85ae7" (UID: "2e80aa4a-3260-4111-a066-112ffac85ae7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.810837 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "2e80aa4a-3260-4111-a066-112ffac85ae7" (UID: "2e80aa4a-3260-4111-a066-112ffac85ae7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.829907 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2e80aa4a-3260-4111-a066-112ffac85ae7" (UID: "2e80aa4a-3260-4111-a066-112ffac85ae7"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.831393 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-kube-api-access-r556b" (OuterVolumeSpecName: "kube-api-access-r556b") pod "2e80aa4a-3260-4111-a066-112ffac85ae7" (UID: "2e80aa4a-3260-4111-a066-112ffac85ae7"). InnerVolumeSpecName "kube-api-access-r556b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.831997 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e80aa4a-3260-4111-a066-112ffac85ae7-config-out" (OuterVolumeSpecName: "config-out") pod "2e80aa4a-3260-4111-a066-112ffac85ae7" (UID: "2e80aa4a-3260-4111-a066-112ffac85ae7"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.834655 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "2e80aa4a-3260-4111-a066-112ffac85ae7" (UID: "2e80aa4a-3260-4111-a066-112ffac85ae7"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.837153 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-web-config" (OuterVolumeSpecName: "web-config") pod "2e80aa4a-3260-4111-a066-112ffac85ae7" (UID: "2e80aa4a-3260-4111-a066-112ffac85ae7"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.838358 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-config" (OuterVolumeSpecName: "config") pod "2e80aa4a-3260-4111-a066-112ffac85ae7" (UID: "2e80aa4a-3260-4111-a066-112ffac85ae7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.838884 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "2e80aa4a-3260-4111-a066-112ffac85ae7" (UID: "2e80aa4a-3260-4111-a066-112ffac85ae7"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.899067 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.899097 4805 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.899107 4805 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e80aa4a-3260-4111-a066-112ffac85ae7-config-out\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.899116 4805 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e80aa4a-3260-4111-a066-112ffac85ae7-web-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.899124 4805 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.899133 4805 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.899143 4805 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.899151 4805 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2e80aa4a-3260-4111-a066-112ffac85ae7-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.899172 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.899182 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r556b\" (UniqueName: \"kubernetes.io/projected/2e80aa4a-3260-4111-a066-112ffac85ae7-kube-api-access-r556b\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:48 crc kubenswrapper[4805]: I0217 00:42:48.915916 4805 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.000764 4805 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.423041 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"2e80aa4a-3260-4111-a066-112ffac85ae7","Type":"ContainerDied","Data":"6f3c35c883a9690f8b216b3c591981d5f8490dd76706bc35d9f97dfce652695a"} Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.423073 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.423272 4805 scope.go:117] "RemoveContainer" containerID="72eaad4e9a592e4510e72c9c7790a5f8918ca4fc0d2e811b99f50f58e14ef105" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.425120 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cpgf5-config-sctvr" event={"ID":"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1","Type":"ContainerStarted","Data":"fe832a2d02c28d84252e7c1edfde0c46a465cc48d68c8bcae31e0c2c15dbd45d"} Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.425145 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cpgf5-config-sctvr" event={"ID":"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1","Type":"ContainerStarted","Data":"2c0d79e7047bf9cb7b04c08239235e849a69801d35cfbe2dccb4e730ab37c602"} Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.432473 4805 generic.go:334] "Generic (PLEG): container finished" podID="8150553f-2c0e-4371-9b0d-22364c3c9db4" containerID="8c72fd9c7b7a0399ac2042449a29653aeb38b5fd5438ecea8eac10b1c319dbae" exitCode=0 Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.432520 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c298m" event={"ID":"8150553f-2c0e-4371-9b0d-22364c3c9db4","Type":"ContainerDied","Data":"8c72fd9c7b7a0399ac2042449a29653aeb38b5fd5438ecea8eac10b1c319dbae"} Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.453219 4805 scope.go:117] "RemoveContainer" containerID="fbae9e878feb63f82315f0132657cf816b416945cc3d258c27f8087be798bcef" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.454091 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cpgf5-config-sctvr" podStartSLOduration=2.4540710040000002 podStartE2EDuration="2.454071004s" podCreationTimestamp="2026-02-17 00:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:42:49.451082101 +0000 UTC m=+1195.466891499" watchObservedRunningTime="2026-02-17 00:42:49.454071004 +0000 UTC m=+1195.469880402" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.483616 4805 scope.go:117] "RemoveContainer" containerID="4a484a728298a20e1b9848c9f9e613d8f0b2cde3abff5c17f26d492acef20f12" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.502562 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.506523 4805 scope.go:117] "RemoveContainer" containerID="b2ec6aba0a414f7c3f330c820f068c7222c2a2073eb826738cfea615cea07ffd" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.510212 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.562052 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 00:42:49 crc kubenswrapper[4805]: E0217 00:42:49.563537 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="thanos-sidecar" Feb 17 00:42:49 crc 
kubenswrapper[4805]: I0217 00:42:49.563687 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="thanos-sidecar" Feb 17 00:42:49 crc kubenswrapper[4805]: E0217 00:42:49.563839 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="init-config-reloader" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.563975 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="init-config-reloader" Feb 17 00:42:49 crc kubenswrapper[4805]: E0217 00:42:49.564236 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="config-reloader" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.564327 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="config-reloader" Feb 17 00:42:49 crc kubenswrapper[4805]: E0217 00:42:49.567602 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad49d26f-ba62-4191-bb6c-1fa3a56401cb" containerName="mariadb-account-create-update" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.567703 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad49d26f-ba62-4191-bb6c-1fa3a56401cb" containerName="mariadb-account-create-update" Feb 17 00:42:49 crc kubenswrapper[4805]: E0217 00:42:49.567799 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a6ab18e-af1c-44c2-9d84-cb294ed04fdb" containerName="mariadb-database-create" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.567882 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a6ab18e-af1c-44c2-9d84-cb294ed04fdb" containerName="mariadb-database-create" Feb 17 00:42:49 crc kubenswrapper[4805]: E0217 00:42:49.568043 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="prometheus" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.568162 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="prometheus" Feb 17 00:42:49 crc kubenswrapper[4805]: E0217 00:42:49.568658 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab61f86-d58e-4874-99f0-bd197d671827" containerName="mariadb-account-create-update" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.568740 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab61f86-d58e-4874-99f0-bd197d671827" containerName="mariadb-account-create-update" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.571485 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad49d26f-ba62-4191-bb6c-1fa3a56401cb" containerName="mariadb-account-create-update" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.571720 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="thanos-sidecar" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.571824 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="prometheus" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.571903 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab61f86-d58e-4874-99f0-bd197d671827" containerName="mariadb-account-create-update" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.571970 4805 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" containerName="config-reloader" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.572041 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a6ab18e-af1c-44c2-9d84-cb294ed04fdb" containerName="mariadb-database-create" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.575026 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.576989 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.577817 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.578135 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.578150 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.578417 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.578566 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-9ttfp" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.578588 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.578752 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.590295 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.595740 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.608971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.609024 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tslcv\" (UniqueName: \"kubernetes.io/projected/ec567d49-235c-4e83-8b76-c5df4e187fc0-kube-api-access-tslcv\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.609066 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ec567d49-235c-4e83-8b76-c5df4e187fc0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc 
kubenswrapper[4805]: I0217 00:42:49.609125 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.609162 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ec567d49-235c-4e83-8b76-c5df4e187fc0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.609208 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ec567d49-235c-4e83-8b76-c5df4e187fc0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.609247 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.609284 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.609466 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ec567d49-235c-4e83-8b76-c5df4e187fc0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.609543 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ec567d49-235c-4e83-8b76-c5df4e187fc0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.609608 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.609629 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.609648 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.711912 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.712171 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ec567d49-235c-4e83-8b76-c5df4e187fc0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.712266 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ec567d49-235c-4e83-8b76-c5df4e187fc0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.712423 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.712510 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.712603 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.712719 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"prometheus-metric-storage-0\" (UID: 
\"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.712804 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tslcv\" (UniqueName: \"kubernetes.io/projected/ec567d49-235c-4e83-8b76-c5df4e187fc0-kube-api-access-tslcv\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.712893 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ec567d49-235c-4e83-8b76-c5df4e187fc0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.713003 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.713091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ec567d49-235c-4e83-8b76-c5df4e187fc0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.713195 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ec567d49-235c-4e83-8b76-c5df4e187fc0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.713293 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.715681 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/ec567d49-235c-4e83-8b76-c5df4e187fc0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.715884 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.717693 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/ec567d49-235c-4e83-8b76-c5df4e187fc0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.718671 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/ec567d49-235c-4e83-8b76-c5df4e187fc0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.722109 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.722746 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.723243 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ec567d49-235c-4e83-8b76-c5df4e187fc0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.723534 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.724696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.726580 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ec567d49-235c-4e83-8b76-c5df4e187fc0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.728792 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-config\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.733668 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/ec567d49-235c-4e83-8b76-c5df4e187fc0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.733898 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tslcv\" (UniqueName: \"kubernetes.io/projected/ec567d49-235c-4e83-8b76-c5df4e187fc0-kube-api-access-tslcv\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.789289 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5zwqp"] Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.791407 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"prometheus-metric-storage-0\" (UID: \"ec567d49-235c-4e83-8b76-c5df4e187fc0\") " pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.804310 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5zwqp"] Feb 17 00:42:49 crc kubenswrapper[4805]: I0217 00:42:49.962150 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.443637 4805 generic.go:334] "Generic (PLEG): container finished" podID="ae62f1dd-4aaa-40c5-a682-6064ed39c0e1" containerID="fe832a2d02c28d84252e7c1edfde0c46a465cc48d68c8bcae31e0c2c15dbd45d" exitCode=0 Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.443746 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cpgf5-config-sctvr" event={"ID":"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1","Type":"ContainerDied","Data":"fe832a2d02c28d84252e7c1edfde0c46a465cc48d68c8bcae31e0c2c15dbd45d"} Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.513013 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.797169 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e80aa4a-3260-4111-a066-112ffac85ae7" path="/var/lib/kubelet/pods/2e80aa4a-3260-4111-a066-112ffac85ae7/volumes" Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.798366 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad49d26f-ba62-4191-bb6c-1fa3a56401cb" path="/var/lib/kubelet/pods/ad49d26f-ba62-4191-bb6c-1fa3a56401cb/volumes" Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.831830 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.948385 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-dispersionconf\") pod \"8150553f-2c0e-4371-9b0d-22364c3c9db4\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.948476 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-ring-data-devices\") pod \"8150553f-2c0e-4371-9b0d-22364c3c9db4\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.948572 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8150553f-2c0e-4371-9b0d-22364c3c9db4-etc-swift\") pod \"8150553f-2c0e-4371-9b0d-22364c3c9db4\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.948617 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-swiftconf\") pod \"8150553f-2c0e-4371-9b0d-22364c3c9db4\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.948674 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-combined-ca-bundle\") pod \"8150553f-2c0e-4371-9b0d-22364c3c9db4\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.948734 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-scripts\") pod \"8150553f-2c0e-4371-9b0d-22364c3c9db4\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.948792 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dhvk\" (UniqueName: \"kubernetes.io/projected/8150553f-2c0e-4371-9b0d-22364c3c9db4-kube-api-access-8dhvk\") pod \"8150553f-2c0e-4371-9b0d-22364c3c9db4\" (UID: \"8150553f-2c0e-4371-9b0d-22364c3c9db4\") " Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.949119 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8150553f-2c0e-4371-9b0d-22364c3c9db4" (UID: "8150553f-2c0e-4371-9b0d-22364c3c9db4"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.949323 4805 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.949858 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8150553f-2c0e-4371-9b0d-22364c3c9db4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8150553f-2c0e-4371-9b0d-22364c3c9db4" (UID: "8150553f-2c0e-4371-9b0d-22364c3c9db4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.952781 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8150553f-2c0e-4371-9b0d-22364c3c9db4-kube-api-access-8dhvk" (OuterVolumeSpecName: "kube-api-access-8dhvk") pod "8150553f-2c0e-4371-9b0d-22364c3c9db4" (UID: "8150553f-2c0e-4371-9b0d-22364c3c9db4"). InnerVolumeSpecName "kube-api-access-8dhvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.955529 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8150553f-2c0e-4371-9b0d-22364c3c9db4" (UID: "8150553f-2c0e-4371-9b0d-22364c3c9db4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.971081 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-scripts" (OuterVolumeSpecName: "scripts") pod "8150553f-2c0e-4371-9b0d-22364c3c9db4" (UID: "8150553f-2c0e-4371-9b0d-22364c3c9db4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.978101 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8150553f-2c0e-4371-9b0d-22364c3c9db4" (UID: "8150553f-2c0e-4371-9b0d-22364c3c9db4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:42:50 crc kubenswrapper[4805]: I0217 00:42:50.980775 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8150553f-2c0e-4371-9b0d-22364c3c9db4" (UID: "8150553f-2c0e-4371-9b0d-22364c3c9db4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.050479 4805 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.050524 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.050541 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8150553f-2c0e-4371-9b0d-22364c3c9db4-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.050554 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dhvk\" (UniqueName: \"kubernetes.io/projected/8150553f-2c0e-4371-9b0d-22364c3c9db4-kube-api-access-8dhvk\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.050567 4805 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8150553f-2c0e-4371-9b0d-22364c3c9db4-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.050579 4805 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8150553f-2c0e-4371-9b0d-22364c3c9db4-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.454873 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c298m" event={"ID":"8150553f-2c0e-4371-9b0d-22364c3c9db4","Type":"ContainerDied","Data":"df0a9700ece7330f4404622aed46b034b327708672bc1e213a459cda6853003d"} Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.454941 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df0a9700ece7330f4404622aed46b034b327708672bc1e213a459cda6853003d" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.454894 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-c298m" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.462008 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec567d49-235c-4e83-8b76-c5df4e187fc0","Type":"ContainerStarted","Data":"20253a1afad7208330eab436e044c5f0589829b4a899d4a91897d5f184ec9594"} Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.832142 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.866444 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run-ovn\") pod \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.866513 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-additional-scripts\") pod \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.866560 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1" (UID: "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.866683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r9tc\" (UniqueName: \"kubernetes.io/projected/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-kube-api-access-4r9tc\") pod \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.866793 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-scripts\") pod \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.866840 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-log-ovn\") pod \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.866880 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run\") pod \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\" (UID: \"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1\") " Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.866944 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1" (UID: "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.867052 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run" (OuterVolumeSpecName: "var-run") pod "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1" (UID: "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.867747 4805 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.867764 4805 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.867775 4805 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.868170 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1" (UID: "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.869299 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-scripts" (OuterVolumeSpecName: "scripts") pod "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1" (UID: "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.872932 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-kube-api-access-4r9tc" (OuterVolumeSpecName: "kube-api-access-4r9tc") pod "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1" (UID: "ae62f1dd-4aaa-40c5-a682-6064ed39c0e1"). InnerVolumeSpecName "kube-api-access-4r9tc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.969210 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r9tc\" (UniqueName: \"kubernetes.io/projected/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-kube-api-access-4r9tc\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.969242 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:51 crc kubenswrapper[4805]: I0217 00:42:51.969264 4805 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.276527 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-cpgf5" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.473086 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cpgf5-config-sctvr" event={"ID":"ae62f1dd-4aaa-40c5-a682-6064ed39c0e1","Type":"ContainerDied","Data":"2c0d79e7047bf9cb7b04c08239235e849a69801d35cfbe2dccb4e730ab37c602"} Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.473129 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c0d79e7047bf9cb7b04c08239235e849a69801d35cfbe2dccb4e730ab37c602" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.473148 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cpgf5-config-sctvr" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.537667 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cpgf5-config-sctvr"] Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.545429 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cpgf5-config-sctvr"] Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.679254 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cpgf5-config-bbcns"] Feb 17 00:42:52 crc kubenswrapper[4805]: E0217 00:42:52.679722 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8150553f-2c0e-4371-9b0d-22364c3c9db4" containerName="swift-ring-rebalance" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.679741 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8150553f-2c0e-4371-9b0d-22364c3c9db4" containerName="swift-ring-rebalance" Feb 17 00:42:52 crc kubenswrapper[4805]: E0217 00:42:52.679755 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae62f1dd-4aaa-40c5-a682-6064ed39c0e1" containerName="ovn-config" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.679761 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae62f1dd-4aaa-40c5-a682-6064ed39c0e1" containerName="ovn-config" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.679940 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae62f1dd-4aaa-40c5-a682-6064ed39c0e1" containerName="ovn-config" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.679953 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8150553f-2c0e-4371-9b0d-22364c3c9db4" containerName="swift-ring-rebalance" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.680538 4805 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.684839 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.688958 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cpgf5-config-bbcns"] Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.781967 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.782016 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plgkp\" (UniqueName: \"kubernetes.io/projected/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-kube-api-access-plgkp\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.782038 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-scripts\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.782371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-additional-scripts\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.782503 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-log-ovn\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.782602 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run-ovn\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.797135 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae62f1dd-4aaa-40c5-a682-6064ed39c0e1" path="/var/lib/kubelet/pods/ae62f1dd-4aaa-40c5-a682-6064ed39c0e1/volumes" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.884569 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-log-ovn\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " 
pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.884656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run-ovn\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.884705 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.884729 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plgkp\" (UniqueName: \"kubernetes.io/projected/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-kube-api-access-plgkp\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.884751 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-scripts\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.884903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-additional-scripts\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.885648 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-log-ovn\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.886245 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run-ovn\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.886552 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.887118 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-scripts\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " 
pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.887874 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-additional-scripts\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:52 crc kubenswrapper[4805]: I0217 00:42:52.906694 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plgkp\" (UniqueName: \"kubernetes.io/projected/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-kube-api-access-plgkp\") pod \"ovn-controller-cpgf5-config-bbcns\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.012712 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.474457 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dg7sz"] Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.475904 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dg7sz" Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.478493 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.483690 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dg7sz"] Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.496355 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfj72\" (UniqueName: \"kubernetes.io/projected/d90e7d39-95ba-4b97-ae51-1292c4c235cb-kube-api-access-nfj72\") pod \"root-account-create-update-dg7sz\" (UID: \"d90e7d39-95ba-4b97-ae51-1292c4c235cb\") " pod="openstack/root-account-create-update-dg7sz" Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.496427 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90e7d39-95ba-4b97-ae51-1292c4c235cb-operator-scripts\") pod \"root-account-create-update-dg7sz\" (UID: \"d90e7d39-95ba-4b97-ae51-1292c4c235cb\") " pod="openstack/root-account-create-update-dg7sz" Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.503223 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec567d49-235c-4e83-8b76-c5df4e187fc0","Type":"ContainerStarted","Data":"56ad47a34e97da1bfe33b4968df903d50c704a4c0c25ee4045f6075405956576"} Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.598018 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfj72\" (UniqueName: \"kubernetes.io/projected/d90e7d39-95ba-4b97-ae51-1292c4c235cb-kube-api-access-nfj72\") pod \"root-account-create-update-dg7sz\" (UID: \"d90e7d39-95ba-4b97-ae51-1292c4c235cb\") " pod="openstack/root-account-create-update-dg7sz" Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.598076 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d90e7d39-95ba-4b97-ae51-1292c4c235cb-operator-scripts\") pod \"root-account-create-update-dg7sz\" (UID: \"d90e7d39-95ba-4b97-ae51-1292c4c235cb\") " pod="openstack/root-account-create-update-dg7sz" Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.598640 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90e7d39-95ba-4b97-ae51-1292c4c235cb-operator-scripts\") pod \"root-account-create-update-dg7sz\" (UID: \"d90e7d39-95ba-4b97-ae51-1292c4c235cb\") " pod="openstack/root-account-create-update-dg7sz" Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.620267 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfj72\" (UniqueName: \"kubernetes.io/projected/d90e7d39-95ba-4b97-ae51-1292c4c235cb-kube-api-access-nfj72\") pod \"root-account-create-update-dg7sz\" (UID: \"d90e7d39-95ba-4b97-ae51-1292c4c235cb\") " pod="openstack/root-account-create-update-dg7sz" Feb 17 00:42:53 crc kubenswrapper[4805]: I0217 00:42:53.815381 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dg7sz" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.247888 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.250500 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.258307 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.264830 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.317460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-config-data\") pod \"mysqld-exporter-0\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " pod="openstack/mysqld-exporter-0" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.317937 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvx8l\" (UniqueName: \"kubernetes.io/projected/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-kube-api-access-xvx8l\") pod \"mysqld-exporter-0\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " pod="openstack/mysqld-exporter-0" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.318031 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " pod="openstack/mysqld-exporter-0" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.418977 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-config-data\") pod \"mysqld-exporter-0\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " pod="openstack/mysqld-exporter-0" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.419080 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvx8l\" 
(UniqueName: \"kubernetes.io/projected/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-kube-api-access-xvx8l\") pod \"mysqld-exporter-0\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " pod="openstack/mysqld-exporter-0" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.419099 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " pod="openstack/mysqld-exporter-0" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.424376 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " pod="openstack/mysqld-exporter-0" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.436889 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-config-data\") pod \"mysqld-exporter-0\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " pod="openstack/mysqld-exporter-0" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.438035 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvx8l\" (UniqueName: \"kubernetes.io/projected/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-kube-api-access-xvx8l\") pod \"mysqld-exporter-0\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " pod="openstack/mysqld-exporter-0" Feb 17 00:42:54 crc kubenswrapper[4805]: I0217 00:42:54.582460 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 00:42:56 crc kubenswrapper[4805]: I0217 00:42:56.765257 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:56 crc kubenswrapper[4805]: I0217 00:42:56.773581 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/de228348-37d1-4ec0-9a47-11f4d895e6d6-etc-swift\") pod \"swift-storage-0\" (UID: \"de228348-37d1-4ec0-9a47-11f4d895e6d6\") " pod="openstack/swift-storage-0" Feb 17 00:42:57 crc kubenswrapper[4805]: I0217 00:42:57.003317 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.080158 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.473354 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-nqsq7"] Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.474696 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-nqsq7" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.485000 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-nqsq7"] Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.562031 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-8e7a-account-create-update-92gnd"] Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.563266 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8e7a-account-create-update-92gnd" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.569287 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.572662 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8e7a-account-create-update-92gnd"] Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.595739 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfddx\" (UniqueName: \"kubernetes.io/projected/ca1b1ba7-b284-4f58-baff-840133925a82-kube-api-access-nfddx\") pod \"heat-db-create-nqsq7\" (UID: \"ca1b1ba7-b284-4f58-baff-840133925a82\") " pod="openstack/heat-db-create-nqsq7" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.595881 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca1b1ba7-b284-4f58-baff-840133925a82-operator-scripts\") pod \"heat-db-create-nqsq7\" (UID: \"ca1b1ba7-b284-4f58-baff-840133925a82\") " pod="openstack/heat-db-create-nqsq7" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.681459 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.697595 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca1b1ba7-b284-4f58-baff-840133925a82-operator-scripts\") pod \"heat-db-create-nqsq7\" (UID: \"ca1b1ba7-b284-4f58-baff-840133925a82\") " pod="openstack/heat-db-create-nqsq7" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.697876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfddx\" (UniqueName: \"kubernetes.io/projected/ca1b1ba7-b284-4f58-baff-840133925a82-kube-api-access-nfddx\") pod \"heat-db-create-nqsq7\" (UID: \"ca1b1ba7-b284-4f58-baff-840133925a82\") " pod="openstack/heat-db-create-nqsq7" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.697917 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4z2\" (UniqueName: \"kubernetes.io/projected/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-kube-api-access-vp4z2\") pod \"heat-8e7a-account-create-update-92gnd\" (UID: \"d2f2fd03-808b-40ca-bea0-ac46f4f8770d\") " pod="openstack/heat-8e7a-account-create-update-92gnd" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.697972 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-operator-scripts\") pod \"heat-8e7a-account-create-update-92gnd\" (UID: \"d2f2fd03-808b-40ca-bea0-ac46f4f8770d\") " pod="openstack/heat-8e7a-account-create-update-92gnd" Feb 17 00:42:58 crc 
kubenswrapper[4805]: I0217 00:42:58.698210 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca1b1ba7-b284-4f58-baff-840133925a82-operator-scripts\") pod \"heat-db-create-nqsq7\" (UID: \"ca1b1ba7-b284-4f58-baff-840133925a82\") " pod="openstack/heat-db-create-nqsq7" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.727457 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfddx\" (UniqueName: \"kubernetes.io/projected/ca1b1ba7-b284-4f58-baff-840133925a82-kube-api-access-nfddx\") pod \"heat-db-create-nqsq7\" (UID: \"ca1b1ba7-b284-4f58-baff-840133925a82\") " pod="openstack/heat-db-create-nqsq7" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.741601 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fp7zz"] Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.742914 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fp7zz" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.755528 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fp7zz"] Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.799243 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4z2\" (UniqueName: \"kubernetes.io/projected/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-kube-api-access-vp4z2\") pod \"heat-8e7a-account-create-update-92gnd\" (UID: \"d2f2fd03-808b-40ca-bea0-ac46f4f8770d\") " pod="openstack/heat-8e7a-account-create-update-92gnd" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.799318 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-operator-scripts\") pod \"heat-8e7a-account-create-update-92gnd\" (UID: \"d2f2fd03-808b-40ca-bea0-ac46f4f8770d\") " pod="openstack/heat-8e7a-account-create-update-92gnd" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.801508 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-operator-scripts\") pod \"heat-8e7a-account-create-update-92gnd\" (UID: \"d2f2fd03-808b-40ca-bea0-ac46f4f8770d\") " pod="openstack/heat-8e7a-account-create-update-92gnd" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.813945 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-nqsq7" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.825205 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4z2\" (UniqueName: \"kubernetes.io/projected/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-kube-api-access-vp4z2\") pod \"heat-8e7a-account-create-update-92gnd\" (UID: \"d2f2fd03-808b-40ca-bea0-ac46f4f8770d\") " pod="openstack/heat-8e7a-account-create-update-92gnd" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.846933 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-45ed-account-create-update-dndhk"] Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.848182 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-45ed-account-create-update-dndhk" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.850460 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.883075 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8e7a-account-create-update-92gnd" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.893662 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-d6ckd"] Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.903123 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-operator-scripts\") pod \"cinder-db-create-fp7zz\" (UID: \"fde176ec-50b1-4a8a-8b8d-a652fc977aa5\") " pod="openstack/cinder-db-create-fp7zz" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.903240 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78qjn\" (UniqueName: \"kubernetes.io/projected/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-kube-api-access-78qjn\") pod \"cinder-db-create-fp7zz\" (UID: \"fde176ec-50b1-4a8a-8b8d-a652fc977aa5\") " pod="openstack/cinder-db-create-fp7zz" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.907400 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d6ckd" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.943404 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d6ckd"] Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.975421 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-45ed-account-create-update-dndhk"] Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.991555 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-d6gtj"] Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.992766 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.994452 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.995026 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.995503 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.995669 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xd9kt" Feb 17 00:42:58 crc kubenswrapper[4805]: I0217 00:42:58.999618 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-wxcc6"] Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.001688 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-wxcc6" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.006156 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-operator-scripts\") pod \"barbican-db-create-d6ckd\" (UID: \"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c\") " pod="openstack/barbican-db-create-d6ckd" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.006230 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-operator-scripts\") pod \"cinder-db-create-fp7zz\" (UID: \"fde176ec-50b1-4a8a-8b8d-a652fc977aa5\") " pod="openstack/cinder-db-create-fp7zz" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.006298 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce100ad8-844c-4b1d-8c16-6acce86b75d2-operator-scripts\") pod \"cinder-45ed-account-create-update-dndhk\" (UID: \"ce100ad8-844c-4b1d-8c16-6acce86b75d2\") " pod="openstack/cinder-45ed-account-create-update-dndhk" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.006347 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnq7x\" (UniqueName: \"kubernetes.io/projected/ce100ad8-844c-4b1d-8c16-6acce86b75d2-kube-api-access-fnq7x\") pod \"cinder-45ed-account-create-update-dndhk\" (UID: \"ce100ad8-844c-4b1d-8c16-6acce86b75d2\") " pod="openstack/cinder-45ed-account-create-update-dndhk" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.006371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lwmq\" (UniqueName: \"kubernetes.io/projected/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-kube-api-access-8lwmq\") pod \"barbican-db-create-d6ckd\" (UID: \"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c\") " pod="openstack/barbican-db-create-d6ckd" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.006391 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78qjn\" (UniqueName: \"kubernetes.io/projected/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-kube-api-access-78qjn\") pod \"cinder-db-create-fp7zz\" (UID: \"fde176ec-50b1-4a8a-8b8d-a652fc977aa5\") " pod="openstack/cinder-db-create-fp7zz" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.008683 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-operator-scripts\") pod \"cinder-db-create-fp7zz\" (UID: \"fde176ec-50b1-4a8a-8b8d-a652fc977aa5\") " pod="openstack/cinder-db-create-fp7zz" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.008734 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-d6gtj"] Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.026799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78qjn\" (UniqueName: \"kubernetes.io/projected/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-kube-api-access-78qjn\") pod \"cinder-db-create-fp7zz\" (UID: \"fde176ec-50b1-4a8a-8b8d-a652fc977aa5\") " pod="openstack/cinder-db-create-fp7zz" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.035446 4805 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/neutron-db-create-wxcc6"] Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.043773 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0869-account-create-update-dwqwt"] Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.045172 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0869-account-create-update-dwqwt" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.049825 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.055158 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0869-account-create-update-dwqwt"] Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.084581 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-750c-account-create-update-n6gdl"] Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.086004 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-750c-account-create-update-n6gdl" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.086315 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fp7zz" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.088265 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.095835 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-750c-account-create-update-n6gdl"] Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.110242 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cd7278c-a746-4195-9d5e-035f100862db-operator-scripts\") pod \"neutron-db-create-wxcc6\" (UID: \"5cd7278c-a746-4195-9d5e-035f100862db\") " pod="openstack/neutron-db-create-wxcc6" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.110295 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lwmq\" (UniqueName: \"kubernetes.io/projected/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-kube-api-access-8lwmq\") pod \"barbican-db-create-d6ckd\" (UID: \"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c\") " pod="openstack/barbican-db-create-d6ckd" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.110392 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-config-data\") pod \"keystone-db-sync-d6gtj\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.110436 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-operator-scripts\") pod \"barbican-db-create-d6ckd\" (UID: \"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c\") " pod="openstack/barbican-db-create-d6ckd" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.110469 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qttnf\" (UniqueName: \"kubernetes.io/projected/5cd7278c-a746-4195-9d5e-035f100862db-kube-api-access-qttnf\") pod \"neutron-db-create-wxcc6\" (UID: 
\"5cd7278c-a746-4195-9d5e-035f100862db\") " pod="openstack/neutron-db-create-wxcc6" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.110514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-combined-ca-bundle\") pod \"keystone-db-sync-d6gtj\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.110541 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce100ad8-844c-4b1d-8c16-6acce86b75d2-operator-scripts\") pod \"cinder-45ed-account-create-update-dndhk\" (UID: \"ce100ad8-844c-4b1d-8c16-6acce86b75d2\") " pod="openstack/cinder-45ed-account-create-update-dndhk" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.110562 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xhj\" (UniqueName: \"kubernetes.io/projected/ca71e40f-60ca-4021-974f-0057bf0963cf-kube-api-access-45xhj\") pod \"keystone-db-sync-d6gtj\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.110588 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnq7x\" (UniqueName: \"kubernetes.io/projected/ce100ad8-844c-4b1d-8c16-6acce86b75d2-kube-api-access-fnq7x\") pod \"cinder-45ed-account-create-update-dndhk\" (UID: \"ce100ad8-844c-4b1d-8c16-6acce86b75d2\") " pod="openstack/cinder-45ed-account-create-update-dndhk" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.111529 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce100ad8-844c-4b1d-8c16-6acce86b75d2-operator-scripts\") pod \"cinder-45ed-account-create-update-dndhk\" (UID: \"ce100ad8-844c-4b1d-8c16-6acce86b75d2\") " pod="openstack/cinder-45ed-account-create-update-dndhk" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.111561 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-operator-scripts\") pod \"barbican-db-create-d6ckd\" (UID: \"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c\") " pod="openstack/barbican-db-create-d6ckd" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.130972 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lwmq\" (UniqueName: \"kubernetes.io/projected/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-kube-api-access-8lwmq\") pod \"barbican-db-create-d6ckd\" (UID: \"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c\") " pod="openstack/barbican-db-create-d6ckd" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.144227 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnq7x\" (UniqueName: \"kubernetes.io/projected/ce100ad8-844c-4b1d-8c16-6acce86b75d2-kube-api-access-fnq7x\") pod \"cinder-45ed-account-create-update-dndhk\" (UID: \"ce100ad8-844c-4b1d-8c16-6acce86b75d2\") " pod="openstack/cinder-45ed-account-create-update-dndhk" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.204972 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-45ed-account-create-update-dndhk" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.212252 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ppq\" (UniqueName: \"kubernetes.io/projected/b7e01a2e-86e0-449a-96d8-37396b137271-kube-api-access-54ppq\") pod \"barbican-0869-account-create-update-dwqwt\" (UID: \"b7e01a2e-86e0-449a-96d8-37396b137271\") " pod="openstack/barbican-0869-account-create-update-dwqwt" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.212303 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xhj\" (UniqueName: \"kubernetes.io/projected/ca71e40f-60ca-4021-974f-0057bf0963cf-kube-api-access-45xhj\") pod \"keystone-db-sync-d6gtj\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.212352 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cd7278c-a746-4195-9d5e-035f100862db-operator-scripts\") pod \"neutron-db-create-wxcc6\" (UID: \"5cd7278c-a746-4195-9d5e-035f100862db\") " pod="openstack/neutron-db-create-wxcc6" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.212386 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0686c805-0a62-46a4-ae40-f3831191c403-operator-scripts\") pod \"neutron-750c-account-create-update-n6gdl\" (UID: \"0686c805-0a62-46a4-ae40-f3831191c403\") " pod="openstack/neutron-750c-account-create-update-n6gdl" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.212433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-config-data\") pod \"keystone-db-sync-d6gtj\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.212476 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q957v\" (UniqueName: \"kubernetes.io/projected/0686c805-0a62-46a4-ae40-f3831191c403-kube-api-access-q957v\") pod \"neutron-750c-account-create-update-n6gdl\" (UID: \"0686c805-0a62-46a4-ae40-f3831191c403\") " pod="openstack/neutron-750c-account-create-update-n6gdl" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.212500 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e01a2e-86e0-449a-96d8-37396b137271-operator-scripts\") pod \"barbican-0869-account-create-update-dwqwt\" (UID: \"b7e01a2e-86e0-449a-96d8-37396b137271\") " pod="openstack/barbican-0869-account-create-update-dwqwt" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.212520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qttnf\" (UniqueName: \"kubernetes.io/projected/5cd7278c-a746-4195-9d5e-035f100862db-kube-api-access-qttnf\") pod \"neutron-db-create-wxcc6\" (UID: \"5cd7278c-a746-4195-9d5e-035f100862db\") " pod="openstack/neutron-db-create-wxcc6" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.212574 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-combined-ca-bundle\") pod \"keystone-db-sync-d6gtj\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.213720 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cd7278c-a746-4195-9d5e-035f100862db-operator-scripts\") pod \"neutron-db-create-wxcc6\" (UID: \"5cd7278c-a746-4195-9d5e-035f100862db\") " pod="openstack/neutron-db-create-wxcc6" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.215865 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-combined-ca-bundle\") pod \"keystone-db-sync-d6gtj\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.216585 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-config-data\") pod \"keystone-db-sync-d6gtj\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.228160 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qttnf\" (UniqueName: \"kubernetes.io/projected/5cd7278c-a746-4195-9d5e-035f100862db-kube-api-access-qttnf\") pod \"neutron-db-create-wxcc6\" (UID: \"5cd7278c-a746-4195-9d5e-035f100862db\") " pod="openstack/neutron-db-create-wxcc6" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.229049 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xhj\" (UniqueName: \"kubernetes.io/projected/ca71e40f-60ca-4021-974f-0057bf0963cf-kube-api-access-45xhj\") pod \"keystone-db-sync-d6gtj\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:42:59 crc kubenswrapper[4805]: I0217 00:42:59.241789 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-d6ckd" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.448188 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q957v\" (UniqueName: \"kubernetes.io/projected/0686c805-0a62-46a4-ae40-f3831191c403-kube-api-access-q957v\") pod \"neutron-750c-account-create-update-n6gdl\" (UID: \"0686c805-0a62-46a4-ae40-f3831191c403\") " pod="openstack/neutron-750c-account-create-update-n6gdl" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.448252 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e01a2e-86e0-449a-96d8-37396b137271-operator-scripts\") pod \"barbican-0869-account-create-update-dwqwt\" (UID: \"b7e01a2e-86e0-449a-96d8-37396b137271\") " pod="openstack/barbican-0869-account-create-update-dwqwt" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.448338 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54ppq\" (UniqueName: \"kubernetes.io/projected/b7e01a2e-86e0-449a-96d8-37396b137271-kube-api-access-54ppq\") pod \"barbican-0869-account-create-update-dwqwt\" (UID: \"b7e01a2e-86e0-449a-96d8-37396b137271\") " pod="openstack/barbican-0869-account-create-update-dwqwt" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.448410 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0686c805-0a62-46a4-ae40-f3831191c403-operator-scripts\") pod \"neutron-750c-account-create-update-n6gdl\" (UID: \"0686c805-0a62-46a4-ae40-f3831191c403\") " pod="openstack/neutron-750c-account-create-update-n6gdl" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.449504 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0686c805-0a62-46a4-ae40-f3831191c403-operator-scripts\") pod \"neutron-750c-account-create-update-n6gdl\" (UID: \"0686c805-0a62-46a4-ae40-f3831191c403\") " pod="openstack/neutron-750c-account-create-update-n6gdl" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.451625 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wxcc6" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.458548 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.510782 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e01a2e-86e0-449a-96d8-37396b137271-operator-scripts\") pod \"barbican-0869-account-create-update-dwqwt\" (UID: \"b7e01a2e-86e0-449a-96d8-37396b137271\") " pod="openstack/barbican-0869-account-create-update-dwqwt" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.540004 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q957v\" (UniqueName: \"kubernetes.io/projected/0686c805-0a62-46a4-ae40-f3831191c403-kube-api-access-q957v\") pod \"neutron-750c-account-create-update-n6gdl\" (UID: \"0686c805-0a62-46a4-ae40-f3831191c403\") " pod="openstack/neutron-750c-account-create-update-n6gdl" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.604626 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-750c-account-create-update-n6gdl" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.656473 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54ppq\" (UniqueName: \"kubernetes.io/projected/b7e01a2e-86e0-449a-96d8-37396b137271-kube-api-access-54ppq\") pod \"barbican-0869-account-create-update-dwqwt\" (UID: \"b7e01a2e-86e0-449a-96d8-37396b137271\") " pod="openstack/barbican-0869-account-create-update-dwqwt" Feb 17 00:43:00 crc kubenswrapper[4805]: I0217 00:43:00.877390 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0869-account-create-update-dwqwt" Feb 17 00:43:02 crc kubenswrapper[4805]: I0217 00:43:02.558683 4805 generic.go:334] "Generic (PLEG): container finished" podID="ec567d49-235c-4e83-8b76-c5df4e187fc0" containerID="56ad47a34e97da1bfe33b4968df903d50c704a4c0c25ee4045f6075405956576" exitCode=0 Feb 17 00:43:02 crc kubenswrapper[4805]: I0217 00:43:02.558960 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec567d49-235c-4e83-8b76-c5df4e187fc0","Type":"ContainerDied","Data":"56ad47a34e97da1bfe33b4968df903d50c704a4c0c25ee4045f6075405956576"} Feb 17 00:43:04 crc kubenswrapper[4805]: E0217 00:43:04.514606 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 17 00:43:04 crc kubenswrapper[4805]: E0217 00:43:04.515512 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4cf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-j7v5m_openstack(38464d88-9f3b-485b-872a-98ed2ea8e3be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 00:43:04 crc kubenswrapper[4805]: E0217 00:43:04.517382 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-j7v5m" podUID="38464d88-9f3b-485b-872a-98ed2ea8e3be" Feb 17 00:43:04 crc kubenswrapper[4805]: E0217 00:43:04.592162 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-j7v5m" podUID="38464d88-9f3b-485b-872a-98ed2ea8e3be" Feb 17 00:43:05 crc kubenswrapper[4805]: I0217 00:43:05.610723 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec567d49-235c-4e83-8b76-c5df4e187fc0","Type":"ContainerStarted","Data":"2fd53f98afacdd86235ac70ffc13b4678c34a2ac7c15ff5a3d13bb2a2c9a0f88"} Feb 17 00:43:05 crc kubenswrapper[4805]: I0217 00:43:05.615192 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-750c-account-create-update-n6gdl"] Feb 17 00:43:05 crc kubenswrapper[4805]: I0217 00:43:05.626433 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fp7zz"] Feb 17 00:43:05 crc kubenswrapper[4805]: W0217 00:43:05.631507 4805 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd90e7d39_95ba_4b97_ae51_1292c4c235cb.slice/crio-254a0dbb223b49bde3d354aa199a7589830177a990b65c942e64af304caa0f58 WatchSource:0}: Error finding container 254a0dbb223b49bde3d354aa199a7589830177a990b65c942e64af304caa0f58: Status 404 returned error can't find the container with id 254a0dbb223b49bde3d354aa199a7589830177a990b65c942e64af304caa0f58 Feb 17 00:43:05 crc kubenswrapper[4805]: W0217 00:43:05.632949 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfde176ec_50b1_4a8a_8b8d_a652fc977aa5.slice/crio-b5336c82447ecc511d1eb7792be0f65c9989b786fefbec0d75dbcd61a4d931b6 WatchSource:0}: Error finding container b5336c82447ecc511d1eb7792be0f65c9989b786fefbec0d75dbcd61a4d931b6: Status 404 returned error can't find the container with id b5336c82447ecc511d1eb7792be0f65c9989b786fefbec0d75dbcd61a4d931b6 Feb 17 00:43:05 crc kubenswrapper[4805]: I0217 00:43:05.649087 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 00:43:05 crc kubenswrapper[4805]: I0217 00:43:05.649734 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dg7sz"] Feb 17 00:43:05 crc kubenswrapper[4805]: I0217 00:43:05.658877 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-45ed-account-create-update-dndhk"] Feb 17 00:43:05 crc kubenswrapper[4805]: I0217 00:43:05.807730 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d6ckd"] Feb 17 00:43:05 crc kubenswrapper[4805]: I0217 00:43:05.816609 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 00:43:05 crc kubenswrapper[4805]: I0217 00:43:05.820470 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-d6gtj"] Feb 17 00:43:05 crc kubenswrapper[4805]: I0217 00:43:05.830768 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cpgf5-config-bbcns"] Feb 17 00:43:05 crc kubenswrapper[4805]: I0217 00:43:05.921539 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 00:43:05 crc kubenswrapper[4805]: W0217 00:43:05.941570 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde228348_37d1_4ec0_9a47_11f4d895e6d6.slice/crio-2cb1576e380cbd0cc5606ae5fc71201be9fc5c1a2053cb26afe5cebc0231762a WatchSource:0}: Error finding container 2cb1576e380cbd0cc5606ae5fc71201be9fc5c1a2053cb26afe5cebc0231762a: Status 404 returned error can't find the container with id 2cb1576e380cbd0cc5606ae5fc71201be9fc5c1a2053cb26afe5cebc0231762a Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.198943 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-wxcc6"] Feb 17 00:43:06 crc kubenswrapper[4805]: W0217 00:43:06.203762 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cd7278c_a746_4195_9d5e_035f100862db.slice/crio-7c04c1764d57b79dc17000258726c019e61eba9ad922def3c3e1897abb467f63 WatchSource:0}: Error finding container 7c04c1764d57b79dc17000258726c019e61eba9ad922def3c3e1897abb467f63: Status 404 returned error can't find the container with id 7c04c1764d57b79dc17000258726c019e61eba9ad922def3c3e1897abb467f63 Feb 17 00:43:06 crc 
kubenswrapper[4805]: W0217 00:43:06.223059 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca1b1ba7_b284_4f58_baff_840133925a82.slice/crio-4e0b754fb37eea47fa2d376abb37def709bbbb20b3e71452103f5b2233a5cd20 WatchSource:0}: Error finding container 4e0b754fb37eea47fa2d376abb37def709bbbb20b3e71452103f5b2233a5cd20: Status 404 returned error can't find the container with id 4e0b754fb37eea47fa2d376abb37def709bbbb20b3e71452103f5b2233a5cd20 Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.235081 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-nqsq7"] Feb 17 00:43:06 crc kubenswrapper[4805]: W0217 00:43:06.235994 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7e01a2e_86e0_449a_96d8_37396b137271.slice/crio-f70a5c4fd82fff2da7d7aaa1d3066ac8133a2fac8c81dc24417196c1a10cacc6 WatchSource:0}: Error finding container f70a5c4fd82fff2da7d7aaa1d3066ac8133a2fac8c81dc24417196c1a10cacc6: Status 404 returned error can't find the container with id f70a5c4fd82fff2da7d7aaa1d3066ac8133a2fac8c81dc24417196c1a10cacc6 Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.254966 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.266611 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0869-account-create-update-dwqwt"] Feb 17 00:43:06 crc kubenswrapper[4805]: W0217 00:43:06.269663 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b81ed6f_cf86_4fc8_89e0_6cb03f628e0c.slice/crio-fe3f306321f49d570a41d74679604d49c2b9e621cc545ea150d66223ae8ad0f7 WatchSource:0}: Error finding container fe3f306321f49d570a41d74679604d49c2b9e621cc545ea150d66223ae8ad0f7: Status 404 returned error can't find the container with id fe3f306321f49d570a41d74679604d49c2b9e621cc545ea150d66223ae8ad0f7 Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.275474 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8e7a-account-create-update-92gnd"] Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.620448 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"2cb1576e380cbd0cc5606ae5fc71201be9fc5c1a2053cb26afe5cebc0231762a"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.622298 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fp7zz" event={"ID":"fde176ec-50b1-4a8a-8b8d-a652fc977aa5","Type":"ContainerStarted","Data":"8c0eb357f3d63907d5c804547af52b884c30783735ab631981ffec900d1f59c9"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.622337 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fp7zz" event={"ID":"fde176ec-50b1-4a8a-8b8d-a652fc977aa5","Type":"ContainerStarted","Data":"b5336c82447ecc511d1eb7792be0f65c9989b786fefbec0d75dbcd61a4d931b6"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.624944 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0869-account-create-update-dwqwt" event={"ID":"b7e01a2e-86e0-449a-96d8-37396b137271","Type":"ContainerStarted","Data":"cb346269dd69d1d6bc92c676a23be687da8394fa4b800aa695a6b990aedcb1fa"} Feb 17 00:43:06 crc kubenswrapper[4805]: 
I0217 00:43:06.625044 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0869-account-create-update-dwqwt" event={"ID":"b7e01a2e-86e0-449a-96d8-37396b137271","Type":"ContainerStarted","Data":"f70a5c4fd82fff2da7d7aaa1d3066ac8133a2fac8c81dc24417196c1a10cacc6"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.626907 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-45ed-account-create-update-dndhk" event={"ID":"ce100ad8-844c-4b1d-8c16-6acce86b75d2","Type":"ContainerStarted","Data":"a6f7a3f060d7f022b4ea6e2832811cf1a102d5d7ce1bb1696396084c77178a15"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.627002 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-45ed-account-create-update-dndhk" event={"ID":"ce100ad8-844c-4b1d-8c16-6acce86b75d2","Type":"ContainerStarted","Data":"7f34c75f373caded00945fe0b43d51182327ff7c827d551ee15d2a965862c452"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.628896 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d6ckd" event={"ID":"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c","Type":"ContainerStarted","Data":"b0b07e59cd8e57e5153ef49f88b1206ca554ca102a92edddef8ac03d861d0374"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.628935 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d6ckd" event={"ID":"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c","Type":"ContainerStarted","Data":"42aff4de2e1eb2b889f43c03fc36c86d9e1aae921e0007119e1388797b9224df"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.630147 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c","Type":"ContainerStarted","Data":"fe3f306321f49d570a41d74679604d49c2b9e621cc545ea150d66223ae8ad0f7"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.631170 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-750c-account-create-update-n6gdl" event={"ID":"0686c805-0a62-46a4-ae40-f3831191c403","Type":"ContainerStarted","Data":"fe44a1bb50097d121e463e8be014f953906481706b8cdf598739890a12af7cbe"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.631191 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-750c-account-create-update-n6gdl" event={"ID":"0686c805-0a62-46a4-ae40-f3831191c403","Type":"ContainerStarted","Data":"a7ec3bda7d8e5295fc93b94b42da14ae0629f6c95d4cd8f7679dfe2646eac0a4"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.632816 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-nqsq7" event={"ID":"ca1b1ba7-b284-4f58-baff-840133925a82","Type":"ContainerStarted","Data":"52ac472b28927effa17b5fea79bf9b8c8bcda95ef87a093a7fc3b3584c6ecdd8"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.632839 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-nqsq7" event={"ID":"ca1b1ba7-b284-4f58-baff-840133925a82","Type":"ContainerStarted","Data":"4e0b754fb37eea47fa2d376abb37def709bbbb20b3e71452103f5b2233a5cd20"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.634142 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8e7a-account-create-update-92gnd" event={"ID":"d2f2fd03-808b-40ca-bea0-ac46f4f8770d","Type":"ContainerStarted","Data":"1aa5f947a47892ee328e68ce29c4eb3d5620d3fbcab7206b52ebc47e9958b0f6"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.634164 4805 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/heat-8e7a-account-create-update-92gnd" event={"ID":"d2f2fd03-808b-40ca-bea0-ac46f4f8770d","Type":"ContainerStarted","Data":"ff5908a8ff0658176a981c0731b74c72a9bc6846e640dd41301e754eaf28efd8"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.640651 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wxcc6" event={"ID":"5cd7278c-a746-4195-9d5e-035f100862db","Type":"ContainerStarted","Data":"15543308b45f895f0751984f2868f9e2b06966082c85e44f88df5f3a1caf1251"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.640692 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wxcc6" event={"ID":"5cd7278c-a746-4195-9d5e-035f100862db","Type":"ContainerStarted","Data":"7c04c1764d57b79dc17000258726c019e61eba9ad922def3c3e1897abb467f63"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.641299 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-fp7zz" podStartSLOduration=8.641286522 podStartE2EDuration="8.641286522s" podCreationTimestamp="2026-02-17 00:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:06.638666589 +0000 UTC m=+1212.654475987" watchObservedRunningTime="2026-02-17 00:43:06.641286522 +0000 UTC m=+1212.657095920" Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.642689 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dg7sz" event={"ID":"d90e7d39-95ba-4b97-ae51-1292c4c235cb","Type":"ContainerStarted","Data":"66da351eff6651b8820d8d164de99f239e76e6f6f571c6175de14c07eafa1e3f"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.642714 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dg7sz" event={"ID":"d90e7d39-95ba-4b97-ae51-1292c4c235cb","Type":"ContainerStarted","Data":"254a0dbb223b49bde3d354aa199a7589830177a990b65c942e64af304caa0f58"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.644625 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab0a726b-21c1-4358-8a0e-a4d3af1222e0" containerID="c49e505056e2634c85d207535b633323c26c243219beafef205c79ae22b1e532" exitCode=0 Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.644741 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cpgf5-config-bbcns" event={"ID":"ab0a726b-21c1-4358-8a0e-a4d3af1222e0","Type":"ContainerDied","Data":"c49e505056e2634c85d207535b633323c26c243219beafef205c79ae22b1e532"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.644766 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cpgf5-config-bbcns" event={"ID":"ab0a726b-21c1-4358-8a0e-a4d3af1222e0","Type":"ContainerStarted","Data":"df51d2f6e2ab3ea891d55aa18d9fec4557e43b9c13ac840d357c135e0841044f"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.648525 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d6gtj" event={"ID":"ca71e40f-60ca-4021-974f-0057bf0963cf","Type":"ContainerStarted","Data":"345b5a81a3fbf1e362120f462095f3cac45ded1659f34bcb087b11e55bf34f7d"} Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.654778 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-750c-account-create-update-n6gdl" podStartSLOduration=7.654763706 podStartE2EDuration="7.654763706s" podCreationTimestamp="2026-02-17 00:42:59 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:06.651773863 +0000 UTC m=+1212.667583261" watchObservedRunningTime="2026-02-17 00:43:06.654763706 +0000 UTC m=+1212.670573104" Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.676175 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-nqsq7" podStartSLOduration=8.67615774 podStartE2EDuration="8.67615774s" podCreationTimestamp="2026-02-17 00:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:06.666682027 +0000 UTC m=+1212.682491425" watchObservedRunningTime="2026-02-17 00:43:06.67615774 +0000 UTC m=+1212.691967138" Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.682818 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-d6ckd" podStartSLOduration=8.682803165 podStartE2EDuration="8.682803165s" podCreationTimestamp="2026-02-17 00:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:06.67795967 +0000 UTC m=+1212.693769078" watchObservedRunningTime="2026-02-17 00:43:06.682803165 +0000 UTC m=+1212.698612563" Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.696941 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-8e7a-account-create-update-92gnd" podStartSLOduration=8.696923587 podStartE2EDuration="8.696923587s" podCreationTimestamp="2026-02-17 00:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:06.690059416 +0000 UTC m=+1212.705868814" watchObservedRunningTime="2026-02-17 00:43:06.696923587 +0000 UTC m=+1212.712732985" Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.709677 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-45ed-account-create-update-dndhk" podStartSLOduration=8.70964321 podStartE2EDuration="8.70964321s" podCreationTimestamp="2026-02-17 00:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:06.708151988 +0000 UTC m=+1212.723961386" watchObservedRunningTime="2026-02-17 00:43:06.70964321 +0000 UTC m=+1212.725452608" Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.721203 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-0869-account-create-update-dwqwt" podStartSLOduration=8.72118594 podStartE2EDuration="8.72118594s" podCreationTimestamp="2026-02-17 00:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:06.71901009 +0000 UTC m=+1212.734819488" watchObservedRunningTime="2026-02-17 00:43:06.72118594 +0000 UTC m=+1212.736995338" Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.738534 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-wxcc6" podStartSLOduration=8.738518721 podStartE2EDuration="8.738518721s" podCreationTimestamp="2026-02-17 00:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 00:43:06.735265001 +0000 UTC m=+1212.751074399" watchObservedRunningTime="2026-02-17 00:43:06.738518721 +0000 UTC m=+1212.754328119" Feb 17 00:43:06 crc kubenswrapper[4805]: I0217 00:43:06.774830 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-dg7sz" podStartSLOduration=13.774807209 podStartE2EDuration="13.774807209s" podCreationTimestamp="2026-02-17 00:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:06.75683903 +0000 UTC m=+1212.772648428" watchObservedRunningTime="2026-02-17 00:43:06.774807209 +0000 UTC m=+1212.790616627" Feb 17 00:43:07 crc kubenswrapper[4805]: I0217 00:43:07.662406 4805 generic.go:334] "Generic (PLEG): container finished" podID="466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c" containerID="b0b07e59cd8e57e5153ef49f88b1206ca554ca102a92edddef8ac03d861d0374" exitCode=0 Feb 17 00:43:07 crc kubenswrapper[4805]: I0217 00:43:07.662474 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d6ckd" event={"ID":"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c","Type":"ContainerDied","Data":"b0b07e59cd8e57e5153ef49f88b1206ca554ca102a92edddef8ac03d861d0374"} Feb 17 00:43:07 crc kubenswrapper[4805]: I0217 00:43:07.668024 4805 generic.go:334] "Generic (PLEG): container finished" podID="ca1b1ba7-b284-4f58-baff-840133925a82" containerID="52ac472b28927effa17b5fea79bf9b8c8bcda95ef87a093a7fc3b3584c6ecdd8" exitCode=0 Feb 17 00:43:07 crc kubenswrapper[4805]: I0217 00:43:07.668384 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-nqsq7" event={"ID":"ca1b1ba7-b284-4f58-baff-840133925a82","Type":"ContainerDied","Data":"52ac472b28927effa17b5fea79bf9b8c8bcda95ef87a093a7fc3b3584c6ecdd8"} Feb 17 00:43:07 crc kubenswrapper[4805]: I0217 00:43:07.670572 4805 generic.go:334] "Generic (PLEG): container finished" podID="fde176ec-50b1-4a8a-8b8d-a652fc977aa5" containerID="8c0eb357f3d63907d5c804547af52b884c30783735ab631981ffec900d1f59c9" exitCode=0 Feb 17 00:43:07 crc kubenswrapper[4805]: I0217 00:43:07.670694 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fp7zz" event={"ID":"fde176ec-50b1-4a8a-8b8d-a652fc977aa5","Type":"ContainerDied","Data":"8c0eb357f3d63907d5c804547af52b884c30783735ab631981ffec900d1f59c9"} Feb 17 00:43:07 crc kubenswrapper[4805]: I0217 00:43:07.676742 4805 generic.go:334] "Generic (PLEG): container finished" podID="5cd7278c-a746-4195-9d5e-035f100862db" containerID="15543308b45f895f0751984f2868f9e2b06966082c85e44f88df5f3a1caf1251" exitCode=0 Feb 17 00:43:07 crc kubenswrapper[4805]: I0217 00:43:07.676863 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wxcc6" event={"ID":"5cd7278c-a746-4195-9d5e-035f100862db","Type":"ContainerDied","Data":"15543308b45f895f0751984f2868f9e2b06966082c85e44f88df5f3a1caf1251"} Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.363181 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.519635 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run-ovn\") pod \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.520126 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-additional-scripts\") pod \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.520181 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-scripts\") pod \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.519951 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ab0a726b-21c1-4358-8a0e-a4d3af1222e0" (UID: "ab0a726b-21c1-4358-8a0e-a4d3af1222e0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.520338 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-log-ovn\") pod \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.520373 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run\") pod \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.520429 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plgkp\" (UniqueName: \"kubernetes.io/projected/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-kube-api-access-plgkp\") pod \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\" (UID: \"ab0a726b-21c1-4358-8a0e-a4d3af1222e0\") " Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.520861 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ab0a726b-21c1-4358-8a0e-a4d3af1222e0" (UID: "ab0a726b-21c1-4358-8a0e-a4d3af1222e0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.520891 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ab0a726b-21c1-4358-8a0e-a4d3af1222e0" (UID: "ab0a726b-21c1-4358-8a0e-a4d3af1222e0"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.520912 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run" (OuterVolumeSpecName: "var-run") pod "ab0a726b-21c1-4358-8a0e-a4d3af1222e0" (UID: "ab0a726b-21c1-4358-8a0e-a4d3af1222e0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.520916 4805 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.521513 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-scripts" (OuterVolumeSpecName: "scripts") pod "ab0a726b-21c1-4358-8a0e-a4d3af1222e0" (UID: "ab0a726b-21c1-4358-8a0e-a4d3af1222e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.526395 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-kube-api-access-plgkp" (OuterVolumeSpecName: "kube-api-access-plgkp") pod "ab0a726b-21c1-4358-8a0e-a4d3af1222e0" (UID: "ab0a726b-21c1-4358-8a0e-a4d3af1222e0"). InnerVolumeSpecName "kube-api-access-plgkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.623956 4805 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.624001 4805 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-var-run\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.624019 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plgkp\" (UniqueName: \"kubernetes.io/projected/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-kube-api-access-plgkp\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.624035 4805 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.624051 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab0a726b-21c1-4358-8a0e-a4d3af1222e0-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.689654 4805 generic.go:334] "Generic (PLEG): container finished" podID="d2f2fd03-808b-40ca-bea0-ac46f4f8770d" containerID="1aa5f947a47892ee328e68ce29c4eb3d5620d3fbcab7206b52ebc47e9958b0f6" exitCode=0 Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.689720 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8e7a-account-create-update-92gnd" event={"ID":"d2f2fd03-808b-40ca-bea0-ac46f4f8770d","Type":"ContainerDied","Data":"1aa5f947a47892ee328e68ce29c4eb3d5620d3fbcab7206b52ebc47e9958b0f6"} Feb 17 00:43:08 crc 
kubenswrapper[4805]: I0217 00:43:08.693457 4805 generic.go:334] "Generic (PLEG): container finished" podID="d90e7d39-95ba-4b97-ae51-1292c4c235cb" containerID="66da351eff6651b8820d8d164de99f239e76e6f6f571c6175de14c07eafa1e3f" exitCode=0 Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.693548 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dg7sz" event={"ID":"d90e7d39-95ba-4b97-ae51-1292c4c235cb","Type":"ContainerDied","Data":"66da351eff6651b8820d8d164de99f239e76e6f6f571c6175de14c07eafa1e3f"} Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.696446 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cpgf5-config-bbcns" event={"ID":"ab0a726b-21c1-4358-8a0e-a4d3af1222e0","Type":"ContainerDied","Data":"df51d2f6e2ab3ea891d55aa18d9fec4557e43b9c13ac840d357c135e0841044f"} Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.696585 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df51d2f6e2ab3ea891d55aa18d9fec4557e43b9c13ac840d357c135e0841044f" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.696500 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cpgf5-config-bbcns" Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.712119 4805 generic.go:334] "Generic (PLEG): container finished" podID="ce100ad8-844c-4b1d-8c16-6acce86b75d2" containerID="a6f7a3f060d7f022b4ea6e2832811cf1a102d5d7ce1bb1696396084c77178a15" exitCode=0 Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.712246 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-45ed-account-create-update-dndhk" event={"ID":"ce100ad8-844c-4b1d-8c16-6acce86b75d2","Type":"ContainerDied","Data":"a6f7a3f060d7f022b4ea6e2832811cf1a102d5d7ce1bb1696396084c77178a15"} Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.716234 4805 generic.go:334] "Generic (PLEG): container finished" podID="0686c805-0a62-46a4-ae40-f3831191c403" containerID="fe44a1bb50097d121e463e8be014f953906481706b8cdf598739890a12af7cbe" exitCode=0 Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.716302 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-750c-account-create-update-n6gdl" event={"ID":"0686c805-0a62-46a4-ae40-f3831191c403","Type":"ContainerDied","Data":"fe44a1bb50097d121e463e8be014f953906481706b8cdf598739890a12af7cbe"} Feb 17 00:43:08 crc kubenswrapper[4805]: I0217 00:43:08.719481 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec567d49-235c-4e83-8b76-c5df4e187fc0","Type":"ContainerStarted","Data":"4bacfdd034265becbfc41465b32471309a4ff420b43b4bdb92b327c7fc0c919c"} Feb 17 00:43:09 crc kubenswrapper[4805]: I0217 00:43:09.459133 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cpgf5-config-bbcns"] Feb 17 00:43:09 crc kubenswrapper[4805]: I0217 00:43:09.468542 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cpgf5-config-bbcns"] Feb 17 00:43:10 crc kubenswrapper[4805]: I0217 00:43:10.170825 4805 generic.go:334] "Generic (PLEG): container finished" podID="b7e01a2e-86e0-449a-96d8-37396b137271" containerID="cb346269dd69d1d6bc92c676a23be687da8394fa4b800aa695a6b990aedcb1fa" exitCode=0 Feb 17 00:43:10 crc kubenswrapper[4805]: I0217 00:43:10.170927 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0869-account-create-update-dwqwt" 
event={"ID":"b7e01a2e-86e0-449a-96d8-37396b137271","Type":"ContainerDied","Data":"cb346269dd69d1d6bc92c676a23be687da8394fa4b800aa695a6b990aedcb1fa"} Feb 17 00:43:10 crc kubenswrapper[4805]: I0217 00:43:10.795200 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab0a726b-21c1-4358-8a0e-a4d3af1222e0" path="/var/lib/kubelet/pods/ab0a726b-21c1-4358-8a0e-a4d3af1222e0/volumes" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.849697 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wxcc6" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.858410 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fp7zz" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.864561 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-45ed-account-create-update-dndhk" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.900428 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dg7sz" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.901548 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0869-account-create-update-dwqwt" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.903470 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8e7a-account-create-update-92gnd" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.910840 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-750c-account-create-update-n6gdl" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.926931 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d6ckd" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.937243 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-nqsq7" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.986918 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce100ad8-844c-4b1d-8c16-6acce86b75d2-operator-scripts\") pod \"ce100ad8-844c-4b1d-8c16-6acce86b75d2\" (UID: \"ce100ad8-844c-4b1d-8c16-6acce86b75d2\") " Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.987178 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cd7278c-a746-4195-9d5e-035f100862db-operator-scripts\") pod \"5cd7278c-a746-4195-9d5e-035f100862db\" (UID: \"5cd7278c-a746-4195-9d5e-035f100862db\") " Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.987229 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qttnf\" (UniqueName: \"kubernetes.io/projected/5cd7278c-a746-4195-9d5e-035f100862db-kube-api-access-qttnf\") pod \"5cd7278c-a746-4195-9d5e-035f100862db\" (UID: \"5cd7278c-a746-4195-9d5e-035f100862db\") " Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.987407 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-operator-scripts\") pod \"fde176ec-50b1-4a8a-8b8d-a652fc977aa5\" (UID: \"fde176ec-50b1-4a8a-8b8d-a652fc977aa5\") " Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.987482 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78qjn\" (UniqueName: \"kubernetes.io/projected/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-kube-api-access-78qjn\") pod \"fde176ec-50b1-4a8a-8b8d-a652fc977aa5\" (UID: \"fde176ec-50b1-4a8a-8b8d-a652fc977aa5\") " Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.987534 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnq7x\" (UniqueName: \"kubernetes.io/projected/ce100ad8-844c-4b1d-8c16-6acce86b75d2-kube-api-access-fnq7x\") pod \"ce100ad8-844c-4b1d-8c16-6acce86b75d2\" (UID: \"ce100ad8-844c-4b1d-8c16-6acce86b75d2\") " Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.988176 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce100ad8-844c-4b1d-8c16-6acce86b75d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce100ad8-844c-4b1d-8c16-6acce86b75d2" (UID: "ce100ad8-844c-4b1d-8c16-6acce86b75d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.988268 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fde176ec-50b1-4a8a-8b8d-a652fc977aa5" (UID: "fde176ec-50b1-4a8a-8b8d-a652fc977aa5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.988539 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cd7278c-a746-4195-9d5e-035f100862db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5cd7278c-a746-4195-9d5e-035f100862db" (UID: "5cd7278c-a746-4195-9d5e-035f100862db"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.993639 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd7278c-a746-4195-9d5e-035f100862db-kube-api-access-qttnf" (OuterVolumeSpecName: "kube-api-access-qttnf") pod "5cd7278c-a746-4195-9d5e-035f100862db" (UID: "5cd7278c-a746-4195-9d5e-035f100862db"). InnerVolumeSpecName "kube-api-access-qttnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:11 crc kubenswrapper[4805]: I0217 00:43:11.997835 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-kube-api-access-78qjn" (OuterVolumeSpecName: "kube-api-access-78qjn") pod "fde176ec-50b1-4a8a-8b8d-a652fc977aa5" (UID: "fde176ec-50b1-4a8a-8b8d-a652fc977aa5"). InnerVolumeSpecName "kube-api-access-78qjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.019995 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce100ad8-844c-4b1d-8c16-6acce86b75d2-kube-api-access-fnq7x" (OuterVolumeSpecName: "kube-api-access-fnq7x") pod "ce100ad8-844c-4b1d-8c16-6acce86b75d2" (UID: "ce100ad8-844c-4b1d-8c16-6acce86b75d2"). InnerVolumeSpecName "kube-api-access-fnq7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089268 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfddx\" (UniqueName: \"kubernetes.io/projected/ca1b1ba7-b284-4f58-baff-840133925a82-kube-api-access-nfddx\") pod \"ca1b1ba7-b284-4f58-baff-840133925a82\" (UID: \"ca1b1ba7-b284-4f58-baff-840133925a82\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089354 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-operator-scripts\") pod \"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c\" (UID: \"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089492 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca1b1ba7-b284-4f58-baff-840133925a82-operator-scripts\") pod \"ca1b1ba7-b284-4f58-baff-840133925a82\" (UID: \"ca1b1ba7-b284-4f58-baff-840133925a82\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089556 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90e7d39-95ba-4b97-ae51-1292c4c235cb-operator-scripts\") pod \"d90e7d39-95ba-4b97-ae51-1292c4c235cb\" (UID: \"d90e7d39-95ba-4b97-ae51-1292c4c235cb\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089610 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e01a2e-86e0-449a-96d8-37396b137271-operator-scripts\") pod \"b7e01a2e-86e0-449a-96d8-37396b137271\" (UID: \"b7e01a2e-86e0-449a-96d8-37396b137271\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089668 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q957v\" (UniqueName: \"kubernetes.io/projected/0686c805-0a62-46a4-ae40-f3831191c403-kube-api-access-q957v\") pod 
\"0686c805-0a62-46a4-ae40-f3831191c403\" (UID: \"0686c805-0a62-46a4-ae40-f3831191c403\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089716 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0686c805-0a62-46a4-ae40-f3831191c403-operator-scripts\") pod \"0686c805-0a62-46a4-ae40-f3831191c403\" (UID: \"0686c805-0a62-46a4-ae40-f3831191c403\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089829 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-operator-scripts\") pod \"d2f2fd03-808b-40ca-bea0-ac46f4f8770d\" (UID: \"d2f2fd03-808b-40ca-bea0-ac46f4f8770d\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089864 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lwmq\" (UniqueName: \"kubernetes.io/projected/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-kube-api-access-8lwmq\") pod \"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c\" (UID: \"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089911 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54ppq\" (UniqueName: \"kubernetes.io/projected/b7e01a2e-86e0-449a-96d8-37396b137271-kube-api-access-54ppq\") pod \"b7e01a2e-86e0-449a-96d8-37396b137271\" (UID: \"b7e01a2e-86e0-449a-96d8-37396b137271\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089949 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp4z2\" (UniqueName: \"kubernetes.io/projected/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-kube-api-access-vp4z2\") pod \"d2f2fd03-808b-40ca-bea0-ac46f4f8770d\" (UID: \"d2f2fd03-808b-40ca-bea0-ac46f4f8770d\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.089997 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfj72\" (UniqueName: \"kubernetes.io/projected/d90e7d39-95ba-4b97-ae51-1292c4c235cb-kube-api-access-nfj72\") pod \"d90e7d39-95ba-4b97-ae51-1292c4c235cb\" (UID: \"d90e7d39-95ba-4b97-ae51-1292c4c235cb\") " Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.090109 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca1b1ba7-b284-4f58-baff-840133925a82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca1b1ba7-b284-4f58-baff-840133925a82" (UID: "ca1b1ba7-b284-4f58-baff-840133925a82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.090274 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d90e7d39-95ba-4b97-ae51-1292c4c235cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d90e7d39-95ba-4b97-ae51-1292c4c235cb" (UID: "d90e7d39-95ba-4b97-ae51-1292c4c235cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.090508 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e01a2e-86e0-449a-96d8-37396b137271-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7e01a2e-86e0-449a-96d8-37396b137271" (UID: "b7e01a2e-86e0-449a-96d8-37396b137271"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.090544 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2f2fd03-808b-40ca-bea0-ac46f4f8770d" (UID: "d2f2fd03-808b-40ca-bea0-ac46f4f8770d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.090548 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0686c805-0a62-46a4-ae40-f3831191c403-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0686c805-0a62-46a4-ae40-f3831191c403" (UID: "0686c805-0a62-46a4-ae40-f3831191c403"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.090979 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.090999 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca1b1ba7-b284-4f58-baff-840133925a82-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.091009 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90e7d39-95ba-4b97-ae51-1292c4c235cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.091018 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78qjn\" (UniqueName: \"kubernetes.io/projected/fde176ec-50b1-4a8a-8b8d-a652fc977aa5-kube-api-access-78qjn\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.091043 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnq7x\" (UniqueName: \"kubernetes.io/projected/ce100ad8-844c-4b1d-8c16-6acce86b75d2-kube-api-access-fnq7x\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.091059 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c" (UID: "466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.091110 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7e01a2e-86e0-449a-96d8-37396b137271-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.091122 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0686c805-0a62-46a4-ae40-f3831191c403-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.091130 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce100ad8-844c-4b1d-8c16-6acce86b75d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.091138 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.091148 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cd7278c-a746-4195-9d5e-035f100862db-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.091157 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qttnf\" (UniqueName: \"kubernetes.io/projected/5cd7278c-a746-4195-9d5e-035f100862db-kube-api-access-qttnf\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.093825 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca1b1ba7-b284-4f58-baff-840133925a82-kube-api-access-nfddx" (OuterVolumeSpecName: "kube-api-access-nfddx") pod "ca1b1ba7-b284-4f58-baff-840133925a82" (UID: "ca1b1ba7-b284-4f58-baff-840133925a82"). InnerVolumeSpecName "kube-api-access-nfddx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.093854 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90e7d39-95ba-4b97-ae51-1292c4c235cb-kube-api-access-nfj72" (OuterVolumeSpecName: "kube-api-access-nfj72") pod "d90e7d39-95ba-4b97-ae51-1292c4c235cb" (UID: "d90e7d39-95ba-4b97-ae51-1292c4c235cb"). InnerVolumeSpecName "kube-api-access-nfj72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.093917 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0686c805-0a62-46a4-ae40-f3831191c403-kube-api-access-q957v" (OuterVolumeSpecName: "kube-api-access-q957v") pod "0686c805-0a62-46a4-ae40-f3831191c403" (UID: "0686c805-0a62-46a4-ae40-f3831191c403"). InnerVolumeSpecName "kube-api-access-q957v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.094230 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e01a2e-86e0-449a-96d8-37396b137271-kube-api-access-54ppq" (OuterVolumeSpecName: "kube-api-access-54ppq") pod "b7e01a2e-86e0-449a-96d8-37396b137271" (UID: "b7e01a2e-86e0-449a-96d8-37396b137271"). InnerVolumeSpecName "kube-api-access-54ppq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.094281 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-kube-api-access-vp4z2" (OuterVolumeSpecName: "kube-api-access-vp4z2") pod "d2f2fd03-808b-40ca-bea0-ac46f4f8770d" (UID: "d2f2fd03-808b-40ca-bea0-ac46f4f8770d"). InnerVolumeSpecName "kube-api-access-vp4z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.095772 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-kube-api-access-8lwmq" (OuterVolumeSpecName: "kube-api-access-8lwmq") pod "466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c" (UID: "466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c"). InnerVolumeSpecName "kube-api-access-8lwmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.194303 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q957v\" (UniqueName: \"kubernetes.io/projected/0686c805-0a62-46a4-ae40-f3831191c403-kube-api-access-q957v\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.194370 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lwmq\" (UniqueName: \"kubernetes.io/projected/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-kube-api-access-8lwmq\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.194384 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54ppq\" (UniqueName: \"kubernetes.io/projected/b7e01a2e-86e0-449a-96d8-37396b137271-kube-api-access-54ppq\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.194398 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp4z2\" (UniqueName: \"kubernetes.io/projected/d2f2fd03-808b-40ca-bea0-ac46f4f8770d-kube-api-access-vp4z2\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.194431 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfj72\" (UniqueName: \"kubernetes.io/projected/d90e7d39-95ba-4b97-ae51-1292c4c235cb-kube-api-access-nfj72\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.194445 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.194455 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfddx\" (UniqueName: \"kubernetes.io/projected/ca1b1ba7-b284-4f58-baff-840133925a82-kube-api-access-nfddx\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.202979 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-nqsq7" event={"ID":"ca1b1ba7-b284-4f58-baff-840133925a82","Type":"ContainerDied","Data":"4e0b754fb37eea47fa2d376abb37def709bbbb20b3e71452103f5b2233a5cd20"} Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.203014 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e0b754fb37eea47fa2d376abb37def709bbbb20b3e71452103f5b2233a5cd20" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.203467 4805 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-nqsq7" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.206309 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-wxcc6" event={"ID":"5cd7278c-a746-4195-9d5e-035f100862db","Type":"ContainerDied","Data":"7c04c1764d57b79dc17000258726c019e61eba9ad922def3c3e1897abb467f63"} Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.206352 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c04c1764d57b79dc17000258726c019e61eba9ad922def3c3e1897abb467f63" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.206365 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-wxcc6" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.220039 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d6ckd" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.220216 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d6ckd" event={"ID":"466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c","Type":"ContainerDied","Data":"42aff4de2e1eb2b889f43c03fc36c86d9e1aae921e0007119e1388797b9224df"} Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.220294 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42aff4de2e1eb2b889f43c03fc36c86d9e1aae921e0007119e1388797b9224df" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.223031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fp7zz" event={"ID":"fde176ec-50b1-4a8a-8b8d-a652fc977aa5","Type":"ContainerDied","Data":"b5336c82447ecc511d1eb7792be0f65c9989b786fefbec0d75dbcd61a4d931b6"} Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.223133 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5336c82447ecc511d1eb7792be0f65c9989b786fefbec0d75dbcd61a4d931b6" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.223045 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fp7zz" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.229367 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0869-account-create-update-dwqwt" event={"ID":"b7e01a2e-86e0-449a-96d8-37396b137271","Type":"ContainerDied","Data":"f70a5c4fd82fff2da7d7aaa1d3066ac8133a2fac8c81dc24417196c1a10cacc6"} Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.229417 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f70a5c4fd82fff2da7d7aaa1d3066ac8133a2fac8c81dc24417196c1a10cacc6" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.229495 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0869-account-create-update-dwqwt" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.233613 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-8e7a-account-create-update-92gnd" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.233642 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8e7a-account-create-update-92gnd" event={"ID":"d2f2fd03-808b-40ca-bea0-ac46f4f8770d","Type":"ContainerDied","Data":"ff5908a8ff0658176a981c0731b74c72a9bc6846e640dd41301e754eaf28efd8"} Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.233819 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5908a8ff0658176a981c0731b74c72a9bc6846e640dd41301e754eaf28efd8" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.235677 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dg7sz" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.235713 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dg7sz" event={"ID":"d90e7d39-95ba-4b97-ae51-1292c4c235cb","Type":"ContainerDied","Data":"254a0dbb223b49bde3d354aa199a7589830177a990b65c942e64af304caa0f58"} Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.235757 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254a0dbb223b49bde3d354aa199a7589830177a990b65c942e64af304caa0f58" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.236848 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-45ed-account-create-update-dndhk" event={"ID":"ce100ad8-844c-4b1d-8c16-6acce86b75d2","Type":"ContainerDied","Data":"7f34c75f373caded00945fe0b43d51182327ff7c827d551ee15d2a965862c452"} Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.236875 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f34c75f373caded00945fe0b43d51182327ff7c827d551ee15d2a965862c452" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.236933 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-45ed-account-create-update-dndhk" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.239875 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-750c-account-create-update-n6gdl" event={"ID":"0686c805-0a62-46a4-ae40-f3831191c403","Type":"ContainerDied","Data":"a7ec3bda7d8e5295fc93b94b42da14ae0629f6c95d4cd8f7679dfe2646eac0a4"} Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.239901 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ec3bda7d8e5295fc93b94b42da14ae0629f6c95d4cd8f7679dfe2646eac0a4" Feb 17 00:43:12 crc kubenswrapper[4805]: I0217 00:43:12.239945 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-750c-account-create-update-n6gdl" Feb 17 00:43:13 crc kubenswrapper[4805]: I0217 00:43:13.256865 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"ec567d49-235c-4e83-8b76-c5df4e187fc0","Type":"ContainerStarted","Data":"97749bd2ee06bb9750c63c9fbb7fc1fb8843c540656dd2d3616f2955f237ae8b"} Feb 17 00:43:13 crc kubenswrapper[4805]: I0217 00:43:13.261954 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d6gtj" event={"ID":"ca71e40f-60ca-4021-974f-0057bf0963cf","Type":"ContainerStarted","Data":"f1aa371fef229498e2ed4d986a649838b18988ac18758598d43bc5b4bdc06fa8"} Feb 17 00:43:13 crc kubenswrapper[4805]: I0217 00:43:13.273853 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c","Type":"ContainerStarted","Data":"0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198"} Feb 17 00:43:13 crc kubenswrapper[4805]: I0217 00:43:13.275848 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"b6302a08cb18f8b8c5dda332bb7b1cadbdf68c43013c327ab75c3eccaf16722b"} Feb 17 00:43:13 crc kubenswrapper[4805]: I0217 00:43:13.275870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"df0f3564ebe8a9958357b6dc56ad5cdd35033b1e8be0e6b4bf1f3703d8090e30"} Feb 17 00:43:13 crc kubenswrapper[4805]: I0217 00:43:13.304608 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=24.304580819999998 podStartE2EDuration="24.30458082s" podCreationTimestamp="2026-02-17 00:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:13.298537702 +0000 UTC m=+1219.314347130" watchObservedRunningTime="2026-02-17 00:43:13.30458082 +0000 UTC m=+1219.320390218" Feb 17 00:43:13 crc kubenswrapper[4805]: I0217 00:43:13.331088 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-d6gtj" podStartSLOduration=8.861961338 podStartE2EDuration="15.331063945s" podCreationTimestamp="2026-02-17 00:42:58 +0000 UTC" firstStartedPulling="2026-02-17 00:43:05.816424513 +0000 UTC m=+1211.832233911" lastFinishedPulling="2026-02-17 00:43:12.28552712 +0000 UTC m=+1218.301336518" observedRunningTime="2026-02-17 00:43:13.314747952 +0000 UTC m=+1219.330557390" watchObservedRunningTime="2026-02-17 00:43:13.331063945 +0000 UTC m=+1219.346873373" Feb 17 00:43:13 crc kubenswrapper[4805]: I0217 00:43:13.344863 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=13.083937031 podStartE2EDuration="19.344844828s" podCreationTimestamp="2026-02-17 00:42:54 +0000 UTC" firstStartedPulling="2026-02-17 00:43:06.285051503 +0000 UTC m=+1212.300860901" lastFinishedPulling="2026-02-17 00:43:12.5459593 +0000 UTC m=+1218.561768698" observedRunningTime="2026-02-17 00:43:13.333896184 +0000 UTC m=+1219.349705582" watchObservedRunningTime="2026-02-17 00:43:13.344844828 +0000 UTC m=+1219.360654236" Feb 17 00:43:14 crc kubenswrapper[4805]: I0217 00:43:14.305586 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"a766ad8325cae1358f7f202c24fcd05d9ea1aa8054abff3d42caf9b5337eb925"} Feb 17 00:43:14 crc kubenswrapper[4805]: I0217 00:43:14.305635 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"0b87859d34d9f929fb81e7f623c7a3a7b07f2c8f31f4521ffa88e6b4485b5cd3"} Feb 17 00:43:14 crc kubenswrapper[4805]: I0217 00:43:14.962271 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 00:43:15 crc kubenswrapper[4805]: I0217 00:43:15.622981 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dg7sz"] Feb 17 00:43:15 crc kubenswrapper[4805]: I0217 00:43:15.630797 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dg7sz"] Feb 17 00:43:16 crc kubenswrapper[4805]: I0217 00:43:16.344526 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"b52bc50d317675f3603e63a6a9f48fc45b043279a40a047a67ece189492421f3"} Feb 17 00:43:16 crc kubenswrapper[4805]: I0217 00:43:16.345033 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"5cd82107cbc37b856027a577683dd8ecd2a08551dff54c8f94fb2712bd51cf07"} Feb 17 00:43:16 crc kubenswrapper[4805]: I0217 00:43:16.345065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"947fa9eb3416fe77193f4ea95afdcb63c10fc7b967e2c01fe0c2b6c4e1354dfd"} Feb 17 00:43:16 crc kubenswrapper[4805]: I0217 00:43:16.345084 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"3ffe2bb44f4964d44dfd280bab11073cf734d2066731e6762e6bb7432e802483"} Feb 17 00:43:16 crc kubenswrapper[4805]: I0217 00:43:16.347649 4805 generic.go:334] "Generic (PLEG): container finished" podID="ca71e40f-60ca-4021-974f-0057bf0963cf" containerID="f1aa371fef229498e2ed4d986a649838b18988ac18758598d43bc5b4bdc06fa8" exitCode=0 Feb 17 00:43:16 crc kubenswrapper[4805]: I0217 00:43:16.347711 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d6gtj" event={"ID":"ca71e40f-60ca-4021-974f-0057bf0963cf","Type":"ContainerDied","Data":"f1aa371fef229498e2ed4d986a649838b18988ac18758598d43bc5b4bdc06fa8"} Feb 17 00:43:16 crc kubenswrapper[4805]: I0217 00:43:16.795683 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90e7d39-95ba-4b97-ae51-1292c4c235cb" path="/var/lib/kubelet/pods/d90e7d39-95ba-4b97-ae51-1292c4c235cb/volumes" Feb 17 00:43:17 crc kubenswrapper[4805]: I0217 00:43:17.931378 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.120155 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-combined-ca-bundle\") pod \"ca71e40f-60ca-4021-974f-0057bf0963cf\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.120489 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-config-data\") pod \"ca71e40f-60ca-4021-974f-0057bf0963cf\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.120550 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45xhj\" (UniqueName: \"kubernetes.io/projected/ca71e40f-60ca-4021-974f-0057bf0963cf-kube-api-access-45xhj\") pod \"ca71e40f-60ca-4021-974f-0057bf0963cf\" (UID: \"ca71e40f-60ca-4021-974f-0057bf0963cf\") " Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.127474 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca71e40f-60ca-4021-974f-0057bf0963cf-kube-api-access-45xhj" (OuterVolumeSpecName: "kube-api-access-45xhj") pod "ca71e40f-60ca-4021-974f-0057bf0963cf" (UID: "ca71e40f-60ca-4021-974f-0057bf0963cf"). InnerVolumeSpecName "kube-api-access-45xhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.172145 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca71e40f-60ca-4021-974f-0057bf0963cf" (UID: "ca71e40f-60ca-4021-974f-0057bf0963cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.189213 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-config-data" (OuterVolumeSpecName: "config-data") pod "ca71e40f-60ca-4021-974f-0057bf0963cf" (UID: "ca71e40f-60ca-4021-974f-0057bf0963cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.223305 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45xhj\" (UniqueName: \"kubernetes.io/projected/ca71e40f-60ca-4021-974f-0057bf0963cf-kube-api-access-45xhj\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.223603 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.223746 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca71e40f-60ca-4021-974f-0057bf0963cf-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.369613 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-d6gtj" event={"ID":"ca71e40f-60ca-4021-974f-0057bf0963cf","Type":"ContainerDied","Data":"345b5a81a3fbf1e362120f462095f3cac45ded1659f34bcb087b11e55bf34f7d"} Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.369670 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="345b5a81a3fbf1e362120f462095f3cac45ded1659f34bcb087b11e55bf34f7d" Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.369704 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-d6gtj" Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.382915 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"658cd12d9f07c156c4109891f8cffdebd67dd481c5484968297c46198c6e86ac"} Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.382967 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"1dbd4463f939bc6ac7c0536c5fcdab2da1cf8c3bc0f4c39885687a084f29143b"} Feb 17 00:43:18 crc kubenswrapper[4805]: I0217 00:43:18.382977 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"a7283c456427c01f833895c4e142f5b0a0558bc92b86e4fc19bb8edb2197f28e"} Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636213 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-srm7f"] Feb 17 00:43:19 crc kubenswrapper[4805]: E0217 00:43:18.636631 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e01a2e-86e0-449a-96d8-37396b137271" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636641 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e01a2e-86e0-449a-96d8-37396b137271" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: E0217 00:43:18.636655 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90e7d39-95ba-4b97-ae51-1292c4c235cb" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636661 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90e7d39-95ba-4b97-ae51-1292c4c235cb" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: E0217 00:43:18.636676 
4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce100ad8-844c-4b1d-8c16-6acce86b75d2" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636682 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce100ad8-844c-4b1d-8c16-6acce86b75d2" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: E0217 00:43:18.636691 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde176ec-50b1-4a8a-8b8d-a652fc977aa5" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636697 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde176ec-50b1-4a8a-8b8d-a652fc977aa5" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: E0217 00:43:18.636704 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab0a726b-21c1-4358-8a0e-a4d3af1222e0" containerName="ovn-config" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636710 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab0a726b-21c1-4358-8a0e-a4d3af1222e0" containerName="ovn-config" Feb 17 00:43:19 crc kubenswrapper[4805]: E0217 00:43:18.636724 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd7278c-a746-4195-9d5e-035f100862db" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636729 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd7278c-a746-4195-9d5e-035f100862db" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: E0217 00:43:18.636741 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0686c805-0a62-46a4-ae40-f3831191c403" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636748 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0686c805-0a62-46a4-ae40-f3831191c403" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: E0217 00:43:18.636760 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca1b1ba7-b284-4f58-baff-840133925a82" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636765 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca1b1ba7-b284-4f58-baff-840133925a82" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: E0217 00:43:18.636775 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca71e40f-60ca-4021-974f-0057bf0963cf" containerName="keystone-db-sync" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636780 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca71e40f-60ca-4021-974f-0057bf0963cf" containerName="keystone-db-sync" Feb 17 00:43:19 crc kubenswrapper[4805]: E0217 00:43:18.636788 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f2fd03-808b-40ca-bea0-ac46f4f8770d" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636793 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f2fd03-808b-40ca-bea0-ac46f4f8770d" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: E0217 00:43:18.636802 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636809 4805 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636963 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce100ad8-844c-4b1d-8c16-6acce86b75d2" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636988 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd7278c-a746-4195-9d5e-035f100862db" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.636996 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f2fd03-808b-40ca-bea0-ac46f4f8770d" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.637007 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.637020 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde176ec-50b1-4a8a-8b8d-a652fc977aa5" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.637031 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca71e40f-60ca-4021-974f-0057bf0963cf" containerName="keystone-db-sync" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.637039 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0686c805-0a62-46a4-ae40-f3831191c403" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.637046 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90e7d39-95ba-4b97-ae51-1292c4c235cb" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.637053 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e01a2e-86e0-449a-96d8-37396b137271" containerName="mariadb-account-create-update" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.637059 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab0a726b-21c1-4358-8a0e-a4d3af1222e0" containerName="ovn-config" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.637071 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca1b1ba7-b284-4f58-baff-840133925a82" containerName="mariadb-database-create" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.638184 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:18.650228 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-srm7f"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.142974 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-config\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.143369 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t9vd\" (UniqueName: \"kubernetes.io/projected/cd6ab70f-a9b7-4a98-96ce-708064a35416-kube-api-access-2t9vd\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.143441 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.143489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.143558 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-dns-svc\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.148387 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5sm6t"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.149495 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.167200 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.168118 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xd9kt" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.168491 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.168663 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.168910 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.179203 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5sm6t"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.246583 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-config-data\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.246647 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-dns-svc\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.246671 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-scripts\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.246717 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-combined-ca-bundle\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.246781 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-credential-keys\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.246809 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5ltl\" (UniqueName: \"kubernetes.io/projected/39336cf6-958d-46fa-8c94-b501403aa9b6-kube-api-access-v5ltl\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.246852 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-fernet-keys\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.246877 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-config\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.246903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t9vd\" (UniqueName: \"kubernetes.io/projected/cd6ab70f-a9b7-4a98-96ce-708064a35416-kube-api-access-2t9vd\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.246949 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.246993 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.248034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.248756 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-dns-svc\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.249444 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-config\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.250058 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.258740 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-ztgpf"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.259977 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-ztgpf" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.270972 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.271300 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-5dc2m" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.279386 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t9vd\" (UniqueName: \"kubernetes.io/projected/cd6ab70f-a9b7-4a98-96ce-708064a35416-kube-api-access-2t9vd\") pod \"dnsmasq-dns-f877ddd87-srm7f\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.290668 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-ztgpf"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.339405 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-5ltpl"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.348591 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-combined-ca-bundle\") pod \"heat-db-sync-ztgpf\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " pod="openstack/heat-db-sync-ztgpf" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.348677 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-combined-ca-bundle\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.348754 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-credential-keys\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.348777 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5ltl\" (UniqueName: \"kubernetes.io/projected/39336cf6-958d-46fa-8c94-b501403aa9b6-kube-api-access-v5ltl\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.348833 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-fernet-keys\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.348928 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzws4\" (UniqueName: \"kubernetes.io/projected/aacb9ef7-b269-44c2-9b51-62067ea3545b-kube-api-access-qzws4\") pod \"heat-db-sync-ztgpf\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " pod="openstack/heat-db-sync-ztgpf" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.349012 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-config-data\") pod \"heat-db-sync-ztgpf\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " pod="openstack/heat-db-sync-ztgpf" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.349062 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-config-data\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.349085 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-scripts\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.350594 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.352594 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-scripts\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.353009 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.353287 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-z7lbs" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.353445 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-combined-ca-bundle\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.353483 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.354085 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-fernet-keys\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.355079 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-config-data\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.355086 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-credential-keys\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc 
kubenswrapper[4805]: I0217 00:43:19.431494 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"245f8982fdd200ca46b5de592d013fc6515e665e81852f8dbc1f752d726b1260"} Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.431758 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"84d55c3ff58a6b247518411fcc40f692f57a9b86c32023589b248774b899ace1"} Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.492176 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.494926 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktw2h\" (UniqueName: \"kubernetes.io/projected/1395fd63-af68-412a-9a95-f4ffde9dfe1c-kube-api-access-ktw2h\") pod \"neutron-db-sync-5ltpl\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.504044 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzws4\" (UniqueName: \"kubernetes.io/projected/aacb9ef7-b269-44c2-9b51-62067ea3545b-kube-api-access-qzws4\") pod \"heat-db-sync-ztgpf\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " pod="openstack/heat-db-sync-ztgpf" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.504154 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-config\") pod \"neutron-db-sync-5ltpl\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.504199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-config-data\") pod \"heat-db-sync-ztgpf\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " pod="openstack/heat-db-sync-ztgpf" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.504300 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-combined-ca-bundle\") pod \"heat-db-sync-ztgpf\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " pod="openstack/heat-db-sync-ztgpf" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.505402 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-combined-ca-bundle\") pod \"neutron-db-sync-5ltpl\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.515023 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-config-data\") pod \"heat-db-sync-ztgpf\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " pod="openstack/heat-db-sync-ztgpf" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.528651 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-combined-ca-bundle\") pod \"heat-db-sync-ztgpf\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " pod="openstack/heat-db-sync-ztgpf" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.545943 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5ltl\" (UniqueName: \"kubernetes.io/projected/39336cf6-958d-46fa-8c94-b501403aa9b6-kube-api-access-v5ltl\") pod \"keystone-bootstrap-5sm6t\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.546401 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzws4\" (UniqueName: \"kubernetes.io/projected/aacb9ef7-b269-44c2-9b51-62067ea3545b-kube-api-access-qzws4\") pod \"heat-db-sync-ztgpf\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " pod="openstack/heat-db-sync-ztgpf" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.583761 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5ltpl"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.597992 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-ztgpf" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.611704 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktw2h\" (UniqueName: \"kubernetes.io/projected/1395fd63-af68-412a-9a95-f4ffde9dfe1c-kube-api-access-ktw2h\") pod \"neutron-db-sync-5ltpl\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.611816 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-config\") pod \"neutron-db-sync-5ltpl\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.611944 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-combined-ca-bundle\") pod \"neutron-db-sync-5ltpl\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.621571 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-config\") pod \"neutron-db-sync-5ltpl\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.622157 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-combined-ca-bundle\") pod \"neutron-db-sync-5ltpl\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.643839 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktw2h\" (UniqueName: \"kubernetes.io/projected/1395fd63-af68-412a-9a95-f4ffde9dfe1c-kube-api-access-ktw2h\") pod \"neutron-db-sync-5ltpl\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " pod="openstack/neutron-db-sync-5ltpl" Feb 
17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.658575 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-r8kk4"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.660103 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.665736 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.665956 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4sfjx" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.666078 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.679514 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-r8kk4"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.694427 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-fbvsz"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.695848 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.735055 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.735227 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mrtqj" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.735313 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-64sw8"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.740542 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.746019 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b667f" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.746212 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.746485 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fbvsz"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.746690 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.764895 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.775120 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-64sw8"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.858835 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-combined-ca-bundle\") pod \"barbican-db-sync-fbvsz\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859091 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-config-data\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859184 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-scripts\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859212 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-combined-ca-bundle\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859278 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmp9q\" (UniqueName: \"kubernetes.io/projected/9ddd3866-a515-49a8-8b48-aa6981c7536e-kube-api-access-cmp9q\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859469 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppmgq\" (UniqueName: \"kubernetes.io/projected/e89462a0-ccda-47cf-93e9-b8cd763c3b08-kube-api-access-ppmgq\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859491 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-config-data\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8588c\" (UniqueName: \"kubernetes.io/projected/d265cd4b-2604-4a2e-902a-d31a861c2439-kube-api-access-8588c\") pod \"barbican-db-sync-fbvsz\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859632 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-combined-ca-bundle\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859671 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-scripts\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859714 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e89462a0-ccda-47cf-93e9-b8cd763c3b08-etc-machine-id\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859816 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ddd3866-a515-49a8-8b48-aa6981c7536e-logs\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859838 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-db-sync-config-data\") pod \"barbican-db-sync-fbvsz\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.859880 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-db-sync-config-data\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.910881 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-srm7f"] Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.944573 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.964089 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-combined-ca-bundle\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.964204 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmp9q\" (UniqueName: \"kubernetes.io/projected/9ddd3866-a515-49a8-8b48-aa6981c7536e-kube-api-access-cmp9q\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.964287 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppmgq\" (UniqueName: \"kubernetes.io/projected/e89462a0-ccda-47cf-93e9-b8cd763c3b08-kube-api-access-ppmgq\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.964339 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-config-data\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.968430 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8588c\" (UniqueName: \"kubernetes.io/projected/d265cd4b-2604-4a2e-902a-d31a861c2439-kube-api-access-8588c\") pod \"barbican-db-sync-fbvsz\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.968515 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-combined-ca-bundle\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.968574 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-scripts\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.968617 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e89462a0-ccda-47cf-93e9-b8cd763c3b08-etc-machine-id\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.970715 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ddd3866-a515-49a8-8b48-aa6981c7536e-logs\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.970746 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-db-sync-config-data\") pod \"barbican-db-sync-fbvsz\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.970844 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-db-sync-config-data\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.970891 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-config-data\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.970913 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-combined-ca-bundle\") pod \"barbican-db-sync-fbvsz\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.970963 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-scripts\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.972090 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-config-data\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.979902 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-combined-ca-bundle\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.982444 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e89462a0-ccda-47cf-93e9-b8cd763c3b08-etc-machine-id\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.982550 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-scripts\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.982945 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ddd3866-a515-49a8-8b48-aa6981c7536e-logs\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " 
pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.983169 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-combined-ca-bundle\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.986445 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-db-sync-config-data\") pod \"barbican-db-sync-fbvsz\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.986630 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-scripts\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.988998 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-db-sync-config-data\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.990883 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-config-data\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.991305 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-combined-ca-bundle\") pod \"barbican-db-sync-fbvsz\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.991339 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.998132 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppmgq\" (UniqueName: \"kubernetes.io/projected/e89462a0-ccda-47cf-93e9-b8cd763c3b08-kube-api-access-ppmgq\") pod \"cinder-db-sync-r8kk4\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.998188 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmp9q\" (UniqueName: \"kubernetes.io/projected/9ddd3866-a515-49a8-8b48-aa6981c7536e-kube-api-access-cmp9q\") pod \"placement-db-sync-64sw8\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:19 crc kubenswrapper[4805]: I0217 00:43:19.999905 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.001237 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-7btc4"] Feb 17 00:43:20 crc 
kubenswrapper[4805]: I0217 00:43:20.071597 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.072447 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8588c\" (UniqueName: \"kubernetes.io/projected/d265cd4b-2604-4a2e-902a-d31a861c2439-kube-api-access-8588c\") pod \"barbican-db-sync-fbvsz\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.075676 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.077364 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.089547 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.091818 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.094196 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.094392 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.113803 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-7btc4"] Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.174359 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.179728 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqk6d\" (UniqueName: \"kubernetes.io/projected/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-kube-api-access-gqk6d\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.179792 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-run-httpd\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.179820 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.179854 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-config-data\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.179911 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-log-httpd\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.180040 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.180069 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lckp4\" (UniqueName: \"kubernetes.io/projected/ab1916fe-f237-4dd1-8af5-f18a52248311-kube-api-access-lckp4\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.180143 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.180189 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.186866 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-64sw8" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.187865 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.187921 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-scripts\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.187966 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-config\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289515 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqk6d\" (UniqueName: \"kubernetes.io/projected/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-kube-api-access-gqk6d\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289554 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-run-httpd\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289575 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-config-data\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-log-httpd\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289693 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lckp4\" (UniqueName: \"kubernetes.io/projected/ab1916fe-f237-4dd1-8af5-f18a52248311-kube-api-access-lckp4\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289741 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289794 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289812 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-scripts\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.289833 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-config\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.290844 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-config\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.292433 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-run-httpd\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.292665 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-log-httpd\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.293355 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.293771 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.293924 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.313144 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-scripts\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.313768 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lckp4\" (UniqueName: \"kubernetes.io/projected/ab1916fe-f237-4dd1-8af5-f18a52248311-kube-api-access-lckp4\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.314373 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.314682 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqk6d\" (UniqueName: \"kubernetes.io/projected/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-kube-api-access-gqk6d\") pod \"dnsmasq-dns-68dcc9cf6f-7btc4\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.318345 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.319561 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-config-data\") pod \"ceilometer-0\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.464064 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"2de012b437d76dd09cd5fe426ff72828205ff0d4b3c99696fb3070c0d9eb5ee3"} Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.481620 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.498755 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.508464 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.653497 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fkb8r"] Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.654645 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fkb8r" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.661818 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.667451 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fkb8r"] Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.802104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpjq2\" (UniqueName: \"kubernetes.io/projected/a384429f-1585-4ead-bbf6-ea810c568c88-kube-api-access-wpjq2\") pod \"root-account-create-update-fkb8r\" (UID: \"a384429f-1585-4ead-bbf6-ea810c568c88\") " pod="openstack/root-account-create-update-fkb8r" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.802596 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a384429f-1585-4ead-bbf6-ea810c568c88-operator-scripts\") pod \"root-account-create-update-fkb8r\" (UID: \"a384429f-1585-4ead-bbf6-ea810c568c88\") " pod="openstack/root-account-create-update-fkb8r" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.905330 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpjq2\" (UniqueName: \"kubernetes.io/projected/a384429f-1585-4ead-bbf6-ea810c568c88-kube-api-access-wpjq2\") pod \"root-account-create-update-fkb8r\" (UID: \"a384429f-1585-4ead-bbf6-ea810c568c88\") " pod="openstack/root-account-create-update-fkb8r" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.905513 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a384429f-1585-4ead-bbf6-ea810c568c88-operator-scripts\") pod \"root-account-create-update-fkb8r\" (UID: \"a384429f-1585-4ead-bbf6-ea810c568c88\") " pod="openstack/root-account-create-update-fkb8r" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.906829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a384429f-1585-4ead-bbf6-ea810c568c88-operator-scripts\") pod \"root-account-create-update-fkb8r\" (UID: \"a384429f-1585-4ead-bbf6-ea810c568c88\") " pod="openstack/root-account-create-update-fkb8r" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.925090 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpjq2\" (UniqueName: \"kubernetes.io/projected/a384429f-1585-4ead-bbf6-ea810c568c88-kube-api-access-wpjq2\") pod \"root-account-create-update-fkb8r\" (UID: \"a384429f-1585-4ead-bbf6-ea810c568c88\") " pod="openstack/root-account-create-update-fkb8r" Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.925905 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-bootstrap-5sm6t"] Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.947896 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-r8kk4"] Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.962397 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-srm7f"] Feb 17 00:43:20 crc kubenswrapper[4805]: I0217 00:43:20.979821 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fkb8r" Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.004838 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5ltpl"] Feb 17 00:43:21 crc kubenswrapper[4805]: W0217 00:43:21.022341 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaacb9ef7_b269_44c2_9b51_62067ea3545b.slice/crio-344df2054c35cfb63e910b1365887ec79e685330a77a4ce5306577a4ec525cfc WatchSource:0}: Error finding container 344df2054c35cfb63e910b1365887ec79e685330a77a4ce5306577a4ec525cfc: Status 404 returned error can't find the container with id 344df2054c35cfb63e910b1365887ec79e685330a77a4ce5306577a4ec525cfc Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.023171 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-ztgpf"] Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.330413 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-7btc4"] Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.352391 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fbvsz"] Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.365042 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-64sw8"] Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.382541 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.563163 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-64sw8" event={"ID":"9ddd3866-a515-49a8-8b48-aa6981c7536e","Type":"ContainerStarted","Data":"3e3b3f7cd705388ace853b4e42ffb993ac873721022a90f9e33359da5ebc6102"} Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.594514 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fbvsz" event={"ID":"d265cd4b-2604-4a2e-902a-d31a861c2439","Type":"ContainerStarted","Data":"17099717ad2934614b7f86e7aa787608e51f196137218f4c470f47c01d2c7801"} Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.607982 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-srm7f" event={"ID":"cd6ab70f-a9b7-4a98-96ce-708064a35416","Type":"ContainerStarted","Data":"6188f4a734b1284370a6e8e589146f7c5a66c936d529e01ebd20da9abca18943"} Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.619327 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5sm6t" event={"ID":"39336cf6-958d-46fa-8c94-b501403aa9b6","Type":"ContainerStarted","Data":"4c2b9161305d579b77c902468b577bc9f14328e65092a3eedaeb88c74c8bd81a"} Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.620458 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ztgpf" 
event={"ID":"aacb9ef7-b269-44c2-9b51-62067ea3545b","Type":"ContainerStarted","Data":"344df2054c35cfb63e910b1365887ec79e685330a77a4ce5306577a4ec525cfc"} Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.621368 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" event={"ID":"bb6bb971-92e4-4f0d-ac62-319ac77ea25f","Type":"ContainerStarted","Data":"488b5cd50f289e7136582f0707032823b2bdf1fa82661e4f03e01605119e4159"} Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.622287 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab1916fe-f237-4dd1-8af5-f18a52248311","Type":"ContainerStarted","Data":"6b6f194c5248c5e8d48b368898c279216a2b050b1c4bb69ba6c2b656ee842960"} Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.624519 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5ltpl" event={"ID":"1395fd63-af68-412a-9a95-f4ffde9dfe1c","Type":"ContainerStarted","Data":"a821755f3b0ace241f994017eb93c817163e4b2900a1d03a92a159d20509ce06"} Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.638878 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"de228348-37d1-4ec0-9a47-11f4d895e6d6","Type":"ContainerStarted","Data":"e7982cb29f93913a805d7b3a35048353cda663c7b294dd37218413f6123643ca"} Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.682576 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-r8kk4" event={"ID":"e89462a0-ccda-47cf-93e9-b8cd763c3b08","Type":"ContainerStarted","Data":"c2ed1c4eb1678d2e73799ee550e7ccff1e1ba08a11f16fbe791b7a28ca78138c"} Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.753644 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fkb8r"] Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.778164 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=46.918363346 podStartE2EDuration="58.778135292s" podCreationTimestamp="2026-02-17 00:42:23 +0000 UTC" firstStartedPulling="2026-02-17 00:43:05.944231351 +0000 UTC m=+1211.960040749" lastFinishedPulling="2026-02-17 00:43:17.804003287 +0000 UTC m=+1223.819812695" observedRunningTime="2026-02-17 00:43:21.703573092 +0000 UTC m=+1227.719382490" watchObservedRunningTime="2026-02-17 00:43:21.778135292 +0000 UTC m=+1227.793944690" Feb 17 00:43:21 crc kubenswrapper[4805]: I0217 00:43:21.813904 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.081768 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-7btc4"] Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.139881 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-28tfw"] Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.141421 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.146323 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.166876 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-28tfw"] Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.288549 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.288896 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.288923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.288941 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-config\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.288965 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.289000 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrksn\" (UniqueName: \"kubernetes.io/projected/7207c2a4-875d-4f81-a311-0c0d495aea56-kube-api-access-mrksn\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.393316 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.393375 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: 
\"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.393433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.393453 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-config\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.393498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.393535 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrksn\" (UniqueName: \"kubernetes.io/projected/7207c2a4-875d-4f81-a311-0c0d495aea56-kube-api-access-mrksn\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.394802 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.395442 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.396058 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.396606 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-config\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.397103 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 
crc kubenswrapper[4805]: I0217 00:43:22.416150 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrksn\" (UniqueName: \"kubernetes.io/projected/7207c2a4-875d-4f81-a311-0c0d495aea56-kube-api-access-mrksn\") pod \"dnsmasq-dns-58dd9ff6bc-28tfw\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.487880 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.705477 4805 generic.go:334] "Generic (PLEG): container finished" podID="cd6ab70f-a9b7-4a98-96ce-708064a35416" containerID="9f7f6c73392c7e156ffed7fec7f437683346708fd58a446e86aff9dd1451b607" exitCode=0 Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.705848 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-srm7f" event={"ID":"cd6ab70f-a9b7-4a98-96ce-708064a35416","Type":"ContainerDied","Data":"9f7f6c73392c7e156ffed7fec7f437683346708fd58a446e86aff9dd1451b607"} Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.716012 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5ltpl" event={"ID":"1395fd63-af68-412a-9a95-f4ffde9dfe1c","Type":"ContainerStarted","Data":"c8831532b27ea7ca1512d47b11d4e89cfd685557c4d240ea27a352504c5cd58a"} Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.730031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5sm6t" event={"ID":"39336cf6-958d-46fa-8c94-b501403aa9b6","Type":"ContainerStarted","Data":"cfcccfd5b15c29633353d469e79a73b1b9c56503e92879a49713d378e7117a44"} Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.745314 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j7v5m" event={"ID":"38464d88-9f3b-485b-872a-98ed2ea8e3be","Type":"ContainerStarted","Data":"7a5150ef659fb0b7a550733ece273f6d466389a79a2dd970f196ac0271ddb0c5"} Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.755990 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-5ltpl" podStartSLOduration=3.755971608 podStartE2EDuration="3.755971608s" podCreationTimestamp="2026-02-17 00:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:22.750328951 +0000 UTC m=+1228.766138349" watchObservedRunningTime="2026-02-17 00:43:22.755971608 +0000 UTC m=+1228.771781006" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.769593 4805 generic.go:334] "Generic (PLEG): container finished" podID="a384429f-1585-4ead-bbf6-ea810c568c88" containerID="98a643290c20c6631f5d15ced493a6bb73441d364a72041d6f42422843ed387f" exitCode=0 Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.769786 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fkb8r" event={"ID":"a384429f-1585-4ead-bbf6-ea810c568c88","Type":"ContainerDied","Data":"98a643290c20c6631f5d15ced493a6bb73441d364a72041d6f42422843ed387f"} Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.769809 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fkb8r" event={"ID":"a384429f-1585-4ead-bbf6-ea810c568c88","Type":"ContainerStarted","Data":"1bbeaaa5fbe2320cf7ce9abb70d399ab73af1e287e00a5535629b6d6e41943fb"} Feb 17 00:43:22 crc kubenswrapper[4805]: 
I0217 00:43:22.784222 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5sm6t" podStartSLOduration=3.7842028709999997 podStartE2EDuration="3.784202871s" podCreationTimestamp="2026-02-17 00:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:22.775284953 +0000 UTC m=+1228.791094351" watchObservedRunningTime="2026-02-17 00:43:22.784202871 +0000 UTC m=+1228.800012269" Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.784578 4805 generic.go:334] "Generic (PLEG): container finished" podID="bb6bb971-92e4-4f0d-ac62-319ac77ea25f" containerID="e531700c660cd466664aca7306cbf7817206f1723f724364519459d764d6937b" exitCode=0 Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.785806 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" event={"ID":"bb6bb971-92e4-4f0d-ac62-319ac77ea25f","Type":"ContainerDied","Data":"e531700c660cd466664aca7306cbf7817206f1723f724364519459d764d6937b"} Feb 17 00:43:22 crc kubenswrapper[4805]: I0217 00:43:22.800266 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-j7v5m" podStartSLOduration=3.59927085 podStartE2EDuration="40.800246546s" podCreationTimestamp="2026-02-17 00:42:42 +0000 UTC" firstStartedPulling="2026-02-17 00:42:43.781904069 +0000 UTC m=+1189.797713467" lastFinishedPulling="2026-02-17 00:43:20.982879765 +0000 UTC m=+1226.998689163" observedRunningTime="2026-02-17 00:43:22.793203271 +0000 UTC m=+1228.809012679" watchObservedRunningTime="2026-02-17 00:43:22.800246546 +0000 UTC m=+1228.816055944" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.046342 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-28tfw"] Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.076730 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.076772 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.296306 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.412928 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-nb\") pod \"cd6ab70f-a9b7-4a98-96ce-708064a35416\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.413025 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t9vd\" (UniqueName: \"kubernetes.io/projected/cd6ab70f-a9b7-4a98-96ce-708064a35416-kube-api-access-2t9vd\") pod \"cd6ab70f-a9b7-4a98-96ce-708064a35416\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.413091 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-dns-svc\") pod \"cd6ab70f-a9b7-4a98-96ce-708064a35416\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.413120 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-config\") pod \"cd6ab70f-a9b7-4a98-96ce-708064a35416\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.413173 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-sb\") pod \"cd6ab70f-a9b7-4a98-96ce-708064a35416\" (UID: \"cd6ab70f-a9b7-4a98-96ce-708064a35416\") " Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.550637 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-config" (OuterVolumeSpecName: "config") pod "cd6ab70f-a9b7-4a98-96ce-708064a35416" (UID: "cd6ab70f-a9b7-4a98-96ce-708064a35416"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.550852 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd6ab70f-a9b7-4a98-96ce-708064a35416" (UID: "cd6ab70f-a9b7-4a98-96ce-708064a35416"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.550983 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd6ab70f-a9b7-4a98-96ce-708064a35416" (UID: "cd6ab70f-a9b7-4a98-96ce-708064a35416"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.551984 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd6ab70f-a9b7-4a98-96ce-708064a35416" (UID: "cd6ab70f-a9b7-4a98-96ce-708064a35416"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.552821 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd6ab70f-a9b7-4a98-96ce-708064a35416-kube-api-access-2t9vd" (OuterVolumeSpecName: "kube-api-access-2t9vd") pod "cd6ab70f-a9b7-4a98-96ce-708064a35416" (UID: "cd6ab70f-a9b7-4a98-96ce-708064a35416"). InnerVolumeSpecName "kube-api-access-2t9vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.618848 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.618888 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t9vd\" (UniqueName: \"kubernetes.io/projected/cd6ab70f-a9b7-4a98-96ce-708064a35416-kube-api-access-2t9vd\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.618934 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.618946 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.618956 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd6ab70f-a9b7-4a98-96ce-708064a35416-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.795483 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" event={"ID":"bb6bb971-92e4-4f0d-ac62-319ac77ea25f","Type":"ContainerStarted","Data":"13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7"} Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.795658 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" podUID="bb6bb971-92e4-4f0d-ac62-319ac77ea25f" containerName="dnsmasq-dns" containerID="cri-o://13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7" gracePeriod=10 Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.795925 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.798734 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-srm7f" event={"ID":"cd6ab70f-a9b7-4a98-96ce-708064a35416","Type":"ContainerDied","Data":"6188f4a734b1284370a6e8e589146f7c5a66c936d529e01ebd20da9abca18943"} Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.798788 4805 scope.go:117] "RemoveContainer" containerID="9f7f6c73392c7e156ffed7fec7f437683346708fd58a446e86aff9dd1451b607" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.798909 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-srm7f" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.810143 4805 generic.go:334] "Generic (PLEG): container finished" podID="7207c2a4-875d-4f81-a311-0c0d495aea56" containerID="9cac03490ee02e6350a86cfb93e3ce4a1c6b0c4c6f5cdaaea21fa346adc96e57" exitCode=0 Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.810208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" event={"ID":"7207c2a4-875d-4f81-a311-0c0d495aea56","Type":"ContainerDied","Data":"9cac03490ee02e6350a86cfb93e3ce4a1c6b0c4c6f5cdaaea21fa346adc96e57"} Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.810236 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" event={"ID":"7207c2a4-875d-4f81-a311-0c0d495aea56","Type":"ContainerStarted","Data":"3d59be7c63093654cb33ef9168d2870ef6cc6b2ba1b827e17882d46007d7dde9"} Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.832799 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" podStartSLOduration=4.83278146 podStartE2EDuration="4.83278146s" podCreationTimestamp="2026-02-17 00:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:23.82376721 +0000 UTC m=+1229.839576608" watchObservedRunningTime="2026-02-17 00:43:23.83278146 +0000 UTC m=+1229.848590858" Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.964982 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-srm7f"] Feb 17 00:43:23 crc kubenswrapper[4805]: I0217 00:43:23.980060 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-srm7f"] Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.603973 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fkb8r" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.777611 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a384429f-1585-4ead-bbf6-ea810c568c88-operator-scripts\") pod \"a384429f-1585-4ead-bbf6-ea810c568c88\" (UID: \"a384429f-1585-4ead-bbf6-ea810c568c88\") " Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.777724 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpjq2\" (UniqueName: \"kubernetes.io/projected/a384429f-1585-4ead-bbf6-ea810c568c88-kube-api-access-wpjq2\") pod \"a384429f-1585-4ead-bbf6-ea810c568c88\" (UID: \"a384429f-1585-4ead-bbf6-ea810c568c88\") " Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.778186 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a384429f-1585-4ead-bbf6-ea810c568c88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a384429f-1585-4ead-bbf6-ea810c568c88" (UID: "a384429f-1585-4ead-bbf6-ea810c568c88"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.782636 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a384429f-1585-4ead-bbf6-ea810c568c88-kube-api-access-wpjq2" (OuterVolumeSpecName: "kube-api-access-wpjq2") pod "a384429f-1585-4ead-bbf6-ea810c568c88" (UID: "a384429f-1585-4ead-bbf6-ea810c568c88"). InnerVolumeSpecName "kube-api-access-wpjq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.786445 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.799768 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd6ab70f-a9b7-4a98-96ce-708064a35416" path="/var/lib/kubelet/pods/cd6ab70f-a9b7-4a98-96ce-708064a35416/volumes" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.843031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" event={"ID":"7207c2a4-875d-4f81-a311-0c0d495aea56","Type":"ContainerStarted","Data":"690b145e3da1ae3c6955d4ec9b71175c34f56fc87478db4be94f2268acba1c29"} Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.843845 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.851472 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fkb8r" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.851815 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fkb8r" event={"ID":"a384429f-1585-4ead-bbf6-ea810c568c88","Type":"ContainerDied","Data":"1bbeaaa5fbe2320cf7ce9abb70d399ab73af1e287e00a5535629b6d6e41943fb"} Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.851853 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bbeaaa5fbe2320cf7ce9abb70d399ab73af1e287e00a5535629b6d6e41943fb" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.876913 4805 generic.go:334] "Generic (PLEG): container finished" podID="bb6bb971-92e4-4f0d-ac62-319ac77ea25f" containerID="13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7" exitCode=0 Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.876993 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" event={"ID":"bb6bb971-92e4-4f0d-ac62-319ac77ea25f","Type":"ContainerDied","Data":"13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7"} Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.877021 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" event={"ID":"bb6bb971-92e4-4f0d-ac62-319ac77ea25f","Type":"ContainerDied","Data":"488b5cd50f289e7136582f0707032823b2bdf1fa82661e4f03e01605119e4159"} Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.877039 4805 scope.go:117] "RemoveContainer" containerID="13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.877141 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-7btc4" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.884760 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpjq2\" (UniqueName: \"kubernetes.io/projected/a384429f-1585-4ead-bbf6-ea810c568c88-kube-api-access-wpjq2\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.884791 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a384429f-1585-4ead-bbf6-ea810c568c88-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.903251 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" podStartSLOduration=2.903230863 podStartE2EDuration="2.903230863s" podCreationTimestamp="2026-02-17 00:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:24.898649305 +0000 UTC m=+1230.914458723" watchObservedRunningTime="2026-02-17 00:43:24.903230863 +0000 UTC m=+1230.919040261" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.906649 4805 scope.go:117] "RemoveContainer" containerID="e531700c660cd466664aca7306cbf7817206f1723f724364519459d764d6937b" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.940042 4805 scope.go:117] "RemoveContainer" containerID="13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7" Feb 17 00:43:24 crc kubenswrapper[4805]: E0217 00:43:24.940696 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7\": container with ID starting with 13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7 not found: ID does not exist" containerID="13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.941216 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7"} err="failed to get container status \"13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7\": rpc error: code = NotFound desc = could not find container \"13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7\": container with ID starting with 13a221d721aa882b1059d8cc0b6ff131e325c532d79d82576a6d3717b18be3b7 not found: ID does not exist" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.941254 4805 scope.go:117] "RemoveContainer" containerID="e531700c660cd466664aca7306cbf7817206f1723f724364519459d764d6937b" Feb 17 00:43:24 crc kubenswrapper[4805]: E0217 00:43:24.941897 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e531700c660cd466664aca7306cbf7817206f1723f724364519459d764d6937b\": container with ID starting with e531700c660cd466664aca7306cbf7817206f1723f724364519459d764d6937b not found: ID does not exist" containerID="e531700c660cd466664aca7306cbf7817206f1723f724364519459d764d6937b" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.941919 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e531700c660cd466664aca7306cbf7817206f1723f724364519459d764d6937b"} err="failed to get container status 
\"e531700c660cd466664aca7306cbf7817206f1723f724364519459d764d6937b\": rpc error: code = NotFound desc = could not find container \"e531700c660cd466664aca7306cbf7817206f1723f724364519459d764d6937b\": container with ID starting with e531700c660cd466664aca7306cbf7817206f1723f724364519459d764d6937b not found: ID does not exist" Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.985458 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-sb\") pod \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.985497 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqk6d\" (UniqueName: \"kubernetes.io/projected/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-kube-api-access-gqk6d\") pod \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.986215 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-dns-svc\") pod \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.986247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-config\") pod \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.986462 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-nb\") pod \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\" (UID: \"bb6bb971-92e4-4f0d-ac62-319ac77ea25f\") " Feb 17 00:43:24 crc kubenswrapper[4805]: I0217 00:43:24.991311 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-kube-api-access-gqk6d" (OuterVolumeSpecName: "kube-api-access-gqk6d") pod "bb6bb971-92e4-4f0d-ac62-319ac77ea25f" (UID: "bb6bb971-92e4-4f0d-ac62-319ac77ea25f"). InnerVolumeSpecName "kube-api-access-gqk6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:25 crc kubenswrapper[4805]: I0217 00:43:25.032435 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bb6bb971-92e4-4f0d-ac62-319ac77ea25f" (UID: "bb6bb971-92e4-4f0d-ac62-319ac77ea25f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:25 crc kubenswrapper[4805]: I0217 00:43:25.042496 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-config" (OuterVolumeSpecName: "config") pod "bb6bb971-92e4-4f0d-ac62-319ac77ea25f" (UID: "bb6bb971-92e4-4f0d-ac62-319ac77ea25f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:25 crc kubenswrapper[4805]: I0217 00:43:25.065048 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bb6bb971-92e4-4f0d-ac62-319ac77ea25f" (UID: "bb6bb971-92e4-4f0d-ac62-319ac77ea25f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:25 crc kubenswrapper[4805]: I0217 00:43:25.074006 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bb6bb971-92e4-4f0d-ac62-319ac77ea25f" (UID: "bb6bb971-92e4-4f0d-ac62-319ac77ea25f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:25 crc kubenswrapper[4805]: I0217 00:43:25.090138 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:25 crc kubenswrapper[4805]: I0217 00:43:25.090167 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:25 crc kubenswrapper[4805]: I0217 00:43:25.090179 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqk6d\" (UniqueName: \"kubernetes.io/projected/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-kube-api-access-gqk6d\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:25 crc kubenswrapper[4805]: I0217 00:43:25.090190 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:25 crc kubenswrapper[4805]: I0217 00:43:25.090201 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6bb971-92e4-4f0d-ac62-319ac77ea25f-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:25 crc kubenswrapper[4805]: I0217 00:43:25.219460 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-7btc4"] Feb 17 00:43:25 crc kubenswrapper[4805]: I0217 00:43:25.229308 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-7btc4"] Feb 17 00:43:26 crc kubenswrapper[4805]: I0217 00:43:26.806975 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb6bb971-92e4-4f0d-ac62-319ac77ea25f" path="/var/lib/kubelet/pods/bb6bb971-92e4-4f0d-ac62-319ac77ea25f/volumes" Feb 17 00:43:26 crc kubenswrapper[4805]: I0217 00:43:26.908372 4805 generic.go:334] "Generic (PLEG): container finished" podID="39336cf6-958d-46fa-8c94-b501403aa9b6" containerID="cfcccfd5b15c29633353d469e79a73b1b9c56503e92879a49713d378e7117a44" exitCode=0 Feb 17 00:43:26 crc kubenswrapper[4805]: I0217 00:43:26.908470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5sm6t" event={"ID":"39336cf6-958d-46fa-8c94-b501403aa9b6","Type":"ContainerDied","Data":"cfcccfd5b15c29633353d469e79a73b1b9c56503e92879a49713d378e7117a44"} Feb 17 00:43:32 crc kubenswrapper[4805]: I0217 00:43:32.490597 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:43:32 crc kubenswrapper[4805]: I0217 00:43:32.561263 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-m2qjt"] Feb 17 00:43:32 crc kubenswrapper[4805]: I0217 00:43:32.561694 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-m2qjt" podUID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerName="dnsmasq-dns" containerID="cri-o://4ca0aba97e08c0fe815e72a9d3039ae9f0f2455400079df63e2fbde3b26ef4ec" gracePeriod=10 Feb 17 00:43:33 crc kubenswrapper[4805]: I0217 00:43:33.027376 4805 generic.go:334] "Generic (PLEG): container finished" podID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerID="4ca0aba97e08c0fe815e72a9d3039ae9f0f2455400079df63e2fbde3b26ef4ec" exitCode=0 Feb 17 00:43:33 crc kubenswrapper[4805]: I0217 00:43:33.027622 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-m2qjt" event={"ID":"6dd9ba13-24f3-40a0-8354-a9e38c7d1368","Type":"ContainerDied","Data":"4ca0aba97e08c0fe815e72a9d3039ae9f0f2455400079df63e2fbde3b26ef4ec"} Feb 17 00:43:34 crc kubenswrapper[4805]: I0217 00:43:34.037277 4805 generic.go:334] "Generic (PLEG): container finished" podID="38464d88-9f3b-485b-872a-98ed2ea8e3be" containerID="7a5150ef659fb0b7a550733ece273f6d466389a79a2dd970f196ac0271ddb0c5" exitCode=0 Feb 17 00:43:34 crc kubenswrapper[4805]: I0217 00:43:34.037314 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j7v5m" event={"ID":"38464d88-9f3b-485b-872a-98ed2ea8e3be","Type":"ContainerDied","Data":"7a5150ef659fb0b7a550733ece273f6d466389a79a2dd970f196ac0271ddb0c5"} Feb 17 00:43:34 crc kubenswrapper[4805]: I0217 00:43:34.044779 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-m2qjt" podUID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Feb 17 00:43:36 crc kubenswrapper[4805]: I0217 00:43:36.861676 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.012897 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5ltl\" (UniqueName: \"kubernetes.io/projected/39336cf6-958d-46fa-8c94-b501403aa9b6-kube-api-access-v5ltl\") pod \"39336cf6-958d-46fa-8c94-b501403aa9b6\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.013100 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-config-data\") pod \"39336cf6-958d-46fa-8c94-b501403aa9b6\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.013286 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-scripts\") pod \"39336cf6-958d-46fa-8c94-b501403aa9b6\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.013740 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-fernet-keys\") pod \"39336cf6-958d-46fa-8c94-b501403aa9b6\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.013777 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-combined-ca-bundle\") pod \"39336cf6-958d-46fa-8c94-b501403aa9b6\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.013821 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-credential-keys\") pod \"39336cf6-958d-46fa-8c94-b501403aa9b6\" (UID: \"39336cf6-958d-46fa-8c94-b501403aa9b6\") " Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.018562 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-scripts" (OuterVolumeSpecName: "scripts") pod "39336cf6-958d-46fa-8c94-b501403aa9b6" (UID: "39336cf6-958d-46fa-8c94-b501403aa9b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.019082 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "39336cf6-958d-46fa-8c94-b501403aa9b6" (UID: "39336cf6-958d-46fa-8c94-b501403aa9b6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.020730 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39336cf6-958d-46fa-8c94-b501403aa9b6-kube-api-access-v5ltl" (OuterVolumeSpecName: "kube-api-access-v5ltl") pod "39336cf6-958d-46fa-8c94-b501403aa9b6" (UID: "39336cf6-958d-46fa-8c94-b501403aa9b6"). InnerVolumeSpecName "kube-api-access-v5ltl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.021816 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "39336cf6-958d-46fa-8c94-b501403aa9b6" (UID: "39336cf6-958d-46fa-8c94-b501403aa9b6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.049089 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39336cf6-958d-46fa-8c94-b501403aa9b6" (UID: "39336cf6-958d-46fa-8c94-b501403aa9b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.052332 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-config-data" (OuterVolumeSpecName: "config-data") pod "39336cf6-958d-46fa-8c94-b501403aa9b6" (UID: "39336cf6-958d-46fa-8c94-b501403aa9b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.068905 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5sm6t" event={"ID":"39336cf6-958d-46fa-8c94-b501403aa9b6","Type":"ContainerDied","Data":"4c2b9161305d579b77c902468b577bc9f14328e65092a3eedaeb88c74c8bd81a"} Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.068971 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c2b9161305d579b77c902468b577bc9f14328e65092a3eedaeb88c74c8bd81a" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.068931 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5sm6t" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.117768 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5ltl\" (UniqueName: \"kubernetes.io/projected/39336cf6-958d-46fa-8c94-b501403aa9b6-kube-api-access-v5ltl\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.118172 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.118183 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.118191 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.118199 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.118206 4805 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/39336cf6-958d-46fa-8c94-b501403aa9b6-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.957195 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5sm6t"] Feb 17 00:43:37 crc kubenswrapper[4805]: I0217 00:43:37.968141 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5sm6t"] Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.048083 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qb577"] Feb 17 00:43:38 crc kubenswrapper[4805]: E0217 00:43:38.048611 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a384429f-1585-4ead-bbf6-ea810c568c88" containerName="mariadb-account-create-update" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.048630 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a384429f-1585-4ead-bbf6-ea810c568c88" containerName="mariadb-account-create-update" Feb 17 00:43:38 crc kubenswrapper[4805]: E0217 00:43:38.048654 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39336cf6-958d-46fa-8c94-b501403aa9b6" containerName="keystone-bootstrap" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.048663 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="39336cf6-958d-46fa-8c94-b501403aa9b6" containerName="keystone-bootstrap" Feb 17 00:43:38 crc kubenswrapper[4805]: E0217 00:43:38.048681 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6bb971-92e4-4f0d-ac62-319ac77ea25f" containerName="dnsmasq-dns" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.048689 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6bb971-92e4-4f0d-ac62-319ac77ea25f" containerName="dnsmasq-dns" Feb 17 00:43:38 crc kubenswrapper[4805]: E0217 00:43:38.048702 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6ab70f-a9b7-4a98-96ce-708064a35416" containerName="init" Feb 17 
00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.048710 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6ab70f-a9b7-4a98-96ce-708064a35416" containerName="init" Feb 17 00:43:38 crc kubenswrapper[4805]: E0217 00:43:38.048724 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6bb971-92e4-4f0d-ac62-319ac77ea25f" containerName="init" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.048731 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6bb971-92e4-4f0d-ac62-319ac77ea25f" containerName="init" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.048933 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="39336cf6-958d-46fa-8c94-b501403aa9b6" containerName="keystone-bootstrap" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.048964 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd6ab70f-a9b7-4a98-96ce-708064a35416" containerName="init" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.048983 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a384429f-1585-4ead-bbf6-ea810c568c88" containerName="mariadb-account-create-update" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.048998 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6bb971-92e4-4f0d-ac62-319ac77ea25f" containerName="dnsmasq-dns" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.049809 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.051699 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.053760 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xd9kt" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.053984 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.054102 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.054443 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.065188 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qb577"] Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.136750 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7b79\" (UniqueName: \"kubernetes.io/projected/ac778b90-57e0-42ae-b661-8d7418eb00c4-kube-api-access-v7b79\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.136810 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-scripts\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.136840 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-credential-keys\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.137023 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-fernet-keys\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.137211 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-combined-ca-bundle\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.137275 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-config-data\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.239423 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-config-data\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.239560 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7b79\" (UniqueName: \"kubernetes.io/projected/ac778b90-57e0-42ae-b661-8d7418eb00c4-kube-api-access-v7b79\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.239584 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-scripts\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.239626 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-credential-keys\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.239708 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-fernet-keys\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.239775 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-combined-ca-bundle\") pod 
\"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.244770 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-scripts\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.249833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-credential-keys\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.249969 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-combined-ca-bundle\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.250277 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-config-data\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.254126 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-fernet-keys\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.256125 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7b79\" (UniqueName: \"kubernetes.io/projected/ac778b90-57e0-42ae-b661-8d7418eb00c4-kube-api-access-v7b79\") pod \"keystone-bootstrap-qb577\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.375127 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qb577" Feb 17 00:43:38 crc kubenswrapper[4805]: I0217 00:43:38.800274 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39336cf6-958d-46fa-8c94-b501403aa9b6" path="/var/lib/kubelet/pods/39336cf6-958d-46fa-8c94-b501403aa9b6/volumes" Feb 17 00:43:39 crc kubenswrapper[4805]: I0217 00:43:39.044832 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-m2qjt" podUID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Feb 17 00:43:40 crc kubenswrapper[4805]: I0217 00:43:40.113979 4805 generic.go:334] "Generic (PLEG): container finished" podID="1395fd63-af68-412a-9a95-f4ffde9dfe1c" containerID="c8831532b27ea7ca1512d47b11d4e89cfd685557c4d240ea27a352504c5cd58a" exitCode=0 Feb 17 00:43:40 crc kubenswrapper[4805]: I0217 00:43:40.114179 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5ltpl" event={"ID":"1395fd63-af68-412a-9a95-f4ffde9dfe1c","Type":"ContainerDied","Data":"c8831532b27ea7ca1512d47b11d4e89cfd685557c4d240ea27a352504c5cd58a"} Feb 17 00:43:44 crc kubenswrapper[4805]: I0217 00:43:44.045273 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-m2qjt" podUID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Feb 17 00:43:44 crc kubenswrapper[4805]: I0217 00:43:44.045784 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:43:45 crc kubenswrapper[4805]: E0217 00:43:45.591156 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 17 00:43:45 crc kubenswrapper[4805]: E0217 00:43:45.591644 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n666h649h586hbbh67ch65h9fh64bh649h5c7h68dh55bh56fh67bh9bh5bh576h66ch598h59h8ch57ch74h657hc6hd4h64dh67bh5f7h597h58bh9dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lckp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ab1916fe-f237-4dd1-8af5-f18a52248311): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 00:43:46 crc kubenswrapper[4805]: E0217 00:43:46.378554 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 17 00:43:46 crc kubenswrapper[4805]: E0217 00:43:46.378895 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8588c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-fbvsz_openstack(d265cd4b-2604-4a2e-902a-d31a861c2439): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 00:43:46 crc kubenswrapper[4805]: E0217 00:43:46.380176 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-fbvsz" podUID="d265cd4b-2604-4a2e-902a-d31a861c2439" Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.438806 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-j7v5m" Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.520895 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-combined-ca-bundle\") pod \"38464d88-9f3b-485b-872a-98ed2ea8e3be\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.520991 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-config-data\") pod \"38464d88-9f3b-485b-872a-98ed2ea8e3be\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.521131 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4cf2\" (UniqueName: \"kubernetes.io/projected/38464d88-9f3b-485b-872a-98ed2ea8e3be-kube-api-access-s4cf2\") pod \"38464d88-9f3b-485b-872a-98ed2ea8e3be\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.521171 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-db-sync-config-data\") pod \"38464d88-9f3b-485b-872a-98ed2ea8e3be\" (UID: \"38464d88-9f3b-485b-872a-98ed2ea8e3be\") " Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.525904 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "38464d88-9f3b-485b-872a-98ed2ea8e3be" (UID: "38464d88-9f3b-485b-872a-98ed2ea8e3be"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.526639 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38464d88-9f3b-485b-872a-98ed2ea8e3be-kube-api-access-s4cf2" (OuterVolumeSpecName: "kube-api-access-s4cf2") pod "38464d88-9f3b-485b-872a-98ed2ea8e3be" (UID: "38464d88-9f3b-485b-872a-98ed2ea8e3be"). InnerVolumeSpecName "kube-api-access-s4cf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.556387 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38464d88-9f3b-485b-872a-98ed2ea8e3be" (UID: "38464d88-9f3b-485b-872a-98ed2ea8e3be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.589465 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-config-data" (OuterVolumeSpecName: "config-data") pod "38464d88-9f3b-485b-872a-98ed2ea8e3be" (UID: "38464d88-9f3b-485b-872a-98ed2ea8e3be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.622652 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4cf2\" (UniqueName: \"kubernetes.io/projected/38464d88-9f3b-485b-872a-98ed2ea8e3be-kube-api-access-s4cf2\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.622686 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.622703 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.622713 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38464d88-9f3b-485b-872a-98ed2ea8e3be-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:46 crc kubenswrapper[4805]: E0217 00:43:46.833592 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 17 00:43:46 crc kubenswrapper[4805]: E0217 00:43:46.833783 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzws4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod heat-db-sync-ztgpf_openstack(aacb9ef7-b269-44c2-9b51-62067ea3545b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 00:43:46 crc kubenswrapper[4805]: E0217 00:43:46.835242 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-ztgpf" podUID="aacb9ef7-b269-44c2-9b51-62067ea3545b" Feb 17 00:43:46 crc kubenswrapper[4805]: I0217 00:43:46.901824 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.028849 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktw2h\" (UniqueName: \"kubernetes.io/projected/1395fd63-af68-412a-9a95-f4ffde9dfe1c-kube-api-access-ktw2h\") pod \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.028915 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-config\") pod \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.029130 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-combined-ca-bundle\") pod \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\" (UID: \"1395fd63-af68-412a-9a95-f4ffde9dfe1c\") " Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.032867 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1395fd63-af68-412a-9a95-f4ffde9dfe1c-kube-api-access-ktw2h" (OuterVolumeSpecName: "kube-api-access-ktw2h") pod "1395fd63-af68-412a-9a95-f4ffde9dfe1c" (UID: "1395fd63-af68-412a-9a95-f4ffde9dfe1c"). InnerVolumeSpecName "kube-api-access-ktw2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.059895 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1395fd63-af68-412a-9a95-f4ffde9dfe1c" (UID: "1395fd63-af68-412a-9a95-f4ffde9dfe1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.076670 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-config" (OuterVolumeSpecName: "config") pod "1395fd63-af68-412a-9a95-f4ffde9dfe1c" (UID: "1395fd63-af68-412a-9a95-f4ffde9dfe1c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.131651 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.131681 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktw2h\" (UniqueName: \"kubernetes.io/projected/1395fd63-af68-412a-9a95-f4ffde9dfe1c-kube-api-access-ktw2h\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.131692 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1395fd63-af68-412a-9a95-f4ffde9dfe1c-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.183227 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j7v5m" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.183225 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j7v5m" event={"ID":"38464d88-9f3b-485b-872a-98ed2ea8e3be","Type":"ContainerDied","Data":"076ffc8953438c609efd574e31720b724aa40611838224e3396e02b72ef5a5fe"} Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.183285 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="076ffc8953438c609efd574e31720b724aa40611838224e3396e02b72ef5a5fe" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.186371 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5ltpl" event={"ID":"1395fd63-af68-412a-9a95-f4ffde9dfe1c","Type":"ContainerDied","Data":"a821755f3b0ace241f994017eb93c817163e4b2900a1d03a92a159d20509ce06"} Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.186403 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a821755f3b0ace241f994017eb93c817163e4b2900a1d03a92a159d20509ce06" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.186506 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5ltpl" Feb 17 00:43:47 crc kubenswrapper[4805]: E0217 00:43:47.187620 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-ztgpf" podUID="aacb9ef7-b269-44c2-9b51-62067ea3545b" Feb 17 00:43:47 crc kubenswrapper[4805]: E0217 00:43:47.188130 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-fbvsz" podUID="d265cd4b-2604-4a2e-902a-d31a861c2439" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.841484 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-dnkf5"] Feb 17 00:43:47 crc kubenswrapper[4805]: E0217 00:43:47.842432 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38464d88-9f3b-485b-872a-98ed2ea8e3be" containerName="glance-db-sync" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.842451 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="38464d88-9f3b-485b-872a-98ed2ea8e3be" containerName="glance-db-sync" Feb 17 00:43:47 crc kubenswrapper[4805]: E0217 00:43:47.842494 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1395fd63-af68-412a-9a95-f4ffde9dfe1c" containerName="neutron-db-sync" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.842504 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1395fd63-af68-412a-9a95-f4ffde9dfe1c" containerName="neutron-db-sync" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.842730 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="38464d88-9f3b-485b-872a-98ed2ea8e3be" containerName="glance-db-sync" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.842771 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1395fd63-af68-412a-9a95-f4ffde9dfe1c" containerName="neutron-db-sync" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.844795 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.870859 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-dnkf5"] Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.946371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t278s\" (UniqueName: \"kubernetes.io/projected/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-kube-api-access-t278s\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.946425 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-config\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.946471 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.946535 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.946581 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:47 crc kubenswrapper[4805]: I0217 00:43:47.946667 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.048316 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.048391 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t278s\" (UniqueName: \"kubernetes.io/projected/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-kube-api-access-t278s\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.048414 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-config\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.048457 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.048516 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.048546 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.049290 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.049763 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.050554 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-config\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.050698 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.051228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.093282 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t278s\" (UniqueName: 
\"kubernetes.io/projected/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-kube-api-access-t278s\") pod \"dnsmasq-dns-785d8bcb8c-dnkf5\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.169873 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.225141 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-dnkf5"] Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.258529 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-549f7bcc7b-l2thx"] Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.260103 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.266151 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-z7lbs" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.266356 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.273656 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.273901 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.280990 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vjs6q"] Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.282531 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.310436 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vjs6q"] Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.365405 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-549f7bcc7b-l2thx"] Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.383207 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-config\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.383345 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.383443 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.383595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-ovndb-tls-certs\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.383669 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-combined-ca-bundle\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.383742 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.383803 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-httpd-config\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.383828 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck4jd\" (UniqueName: \"kubernetes.io/projected/e07b33ca-66f5-4047-b754-ac637f0db5a5-kube-api-access-ck4jd\") pod \"neutron-549f7bcc7b-l2thx\" (UID: 
\"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.383855 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9xh\" (UniqueName: \"kubernetes.io/projected/7c71d620-0f06-4b24-b647-98e1ea0004b1-kube-api-access-jb9xh\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.383896 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-config\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.383955 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.485459 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.485550 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-ovndb-tls-certs\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.485592 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-combined-ca-bundle\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.485628 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.485661 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-httpd-config\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.485683 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9xh\" (UniqueName: \"kubernetes.io/projected/7c71d620-0f06-4b24-b647-98e1ea0004b1-kube-api-access-jb9xh\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: 
\"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.485703 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck4jd\" (UniqueName: \"kubernetes.io/projected/e07b33ca-66f5-4047-b754-ac637f0db5a5-kube-api-access-ck4jd\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.485733 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-config\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.485779 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.485815 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-config\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.485859 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.487004 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-svc\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.487359 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.489178 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.489699 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: 
I0217 00:43:48.489806 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-httpd-config\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.490189 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-ovndb-tls-certs\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.490622 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-config\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.491370 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-combined-ca-bundle\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.491490 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-config\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.513696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9xh\" (UniqueName: \"kubernetes.io/projected/7c71d620-0f06-4b24-b647-98e1ea0004b1-kube-api-access-jb9xh\") pod \"dnsmasq-dns-55f844cf75-vjs6q\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.518030 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck4jd\" (UniqueName: \"kubernetes.io/projected/e07b33ca-66f5-4047-b754-ac637f0db5a5-kube-api-access-ck4jd\") pod \"neutron-549f7bcc7b-l2thx\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.603119 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:48 crc kubenswrapper[4805]: I0217 00:43:48.616841 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:49 crc kubenswrapper[4805]: I0217 00:43:49.797248 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:43:49 crc kubenswrapper[4805]: I0217 00:43:49.926475 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-nb\") pod \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " Feb 17 00:43:49 crc kubenswrapper[4805]: I0217 00:43:49.926587 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-dns-svc\") pod \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " Feb 17 00:43:49 crc kubenswrapper[4805]: I0217 00:43:49.926634 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-config\") pod \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " Feb 17 00:43:49 crc kubenswrapper[4805]: I0217 00:43:49.926653 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppm2p\" (UniqueName: \"kubernetes.io/projected/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-kube-api-access-ppm2p\") pod \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " Feb 17 00:43:49 crc kubenswrapper[4805]: I0217 00:43:49.926783 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-sb\") pod \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\" (UID: \"6dd9ba13-24f3-40a0-8354-a9e38c7d1368\") " Feb 17 00:43:49 crc kubenswrapper[4805]: I0217 00:43:49.934383 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-kube-api-access-ppm2p" (OuterVolumeSpecName: "kube-api-access-ppm2p") pod "6dd9ba13-24f3-40a0-8354-a9e38c7d1368" (UID: "6dd9ba13-24f3-40a0-8354-a9e38c7d1368"). InnerVolumeSpecName "kube-api-access-ppm2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:49 crc kubenswrapper[4805]: I0217 00:43:49.980807 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6dd9ba13-24f3-40a0-8354-a9e38c7d1368" (UID: "6dd9ba13-24f3-40a0-8354-a9e38c7d1368"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:49 crc kubenswrapper[4805]: I0217 00:43:49.984506 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6dd9ba13-24f3-40a0-8354-a9e38c7d1368" (UID: "6dd9ba13-24f3-40a0-8354-a9e38c7d1368"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:49 crc kubenswrapper[4805]: I0217 00:43:49.989857 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-config" (OuterVolumeSpecName: "config") pod "6dd9ba13-24f3-40a0-8354-a9e38c7d1368" (UID: "6dd9ba13-24f3-40a0-8354-a9e38c7d1368"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:49 crc kubenswrapper[4805]: I0217 00:43:49.992605 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6dd9ba13-24f3-40a0-8354-a9e38c7d1368" (UID: "6dd9ba13-24f3-40a0-8354-a9e38c7d1368"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.028926 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.028975 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.028989 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.028997 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.029007 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppm2p\" (UniqueName: \"kubernetes.io/projected/6dd9ba13-24f3-40a0-8354-a9e38c7d1368-kube-api-access-ppm2p\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.148606 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qb577"] Feb 17 00:43:50 crc kubenswrapper[4805]: W0217 00:43:50.153682 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac778b90_57e0_42ae_b661_8d7418eb00c4.slice/crio-e8ab8f85b4709b14668fc49427cf68e8520cd036dd70403908ec039b7bffaac0 WatchSource:0}: Error finding container e8ab8f85b4709b14668fc49427cf68e8520cd036dd70403908ec039b7bffaac0: Status 404 returned error can't find the container with id e8ab8f85b4709b14668fc49427cf68e8520cd036dd70403908ec039b7bffaac0 Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.160913 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vjs6q"] Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.249751 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb577" event={"ID":"ac778b90-57e0-42ae-b661-8d7418eb00c4","Type":"ContainerStarted","Data":"e8ab8f85b4709b14668fc49427cf68e8520cd036dd70403908ec039b7bffaac0"} Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.251904 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" event={"ID":"7c71d620-0f06-4b24-b647-98e1ea0004b1","Type":"ContainerStarted","Data":"1269fca751f20e3d98f6651efa43e40514ce6609f7a8288ac0e7f0da3e0e9fd4"} Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.257298 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-m2qjt" 
event={"ID":"6dd9ba13-24f3-40a0-8354-a9e38c7d1368","Type":"ContainerDied","Data":"3a128f5f4b6089e0b56e25d3da882ca4f329dc1b755d48311c3e2a42879b8f95"} Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.257432 4805 scope.go:117] "RemoveContainer" containerID="4ca0aba97e08c0fe815e72a9d3039ae9f0f2455400079df63e2fbde3b26ef4ec" Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.257547 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-m2qjt" Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.289678 4805 scope.go:117] "RemoveContainer" containerID="1ef549c95fc1baaa43697702641077555045e0c7ed26ca1fbef6134366651cce" Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.344283 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-m2qjt"] Feb 17 00:43:50 crc kubenswrapper[4805]: W0217 00:43:50.352434 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb47ec8eb_d04d_4ce7_93bc_1c557cd3ad81.slice/crio-05b12b3dfdc57a6df851385fb79255826d74fff27506d848fe0ce19f5b75185b WatchSource:0}: Error finding container 05b12b3dfdc57a6df851385fb79255826d74fff27506d848fe0ce19f5b75185b: Status 404 returned error can't find the container with id 05b12b3dfdc57a6df851385fb79255826d74fff27506d848fe0ce19f5b75185b Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.360881 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-m2qjt"] Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.372213 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-dnkf5"] Feb 17 00:43:50 crc kubenswrapper[4805]: W0217 00:43:50.443628 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode07b33ca_66f5_4047_b754_ac637f0db5a5.slice/crio-22fb4d15018d0efcaded59a795f07c5576dcaf6d177b8ab9dff81c21f2548608 WatchSource:0}: Error finding container 22fb4d15018d0efcaded59a795f07c5576dcaf6d177b8ab9dff81c21f2548608: Status 404 returned error can't find the container with id 22fb4d15018d0efcaded59a795f07c5576dcaf6d177b8ab9dff81c21f2548608 Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.447141 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-549f7bcc7b-l2thx"] Feb 17 00:43:50 crc kubenswrapper[4805]: I0217 00:43:50.796040 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" path="/var/lib/kubelet/pods/6dd9ba13-24f3-40a0-8354-a9e38c7d1368/volumes" Feb 17 00:43:51 crc kubenswrapper[4805]: I0217 00:43:51.273100 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-549f7bcc7b-l2thx" event={"ID":"e07b33ca-66f5-4047-b754-ac637f0db5a5","Type":"ContainerStarted","Data":"22fb4d15018d0efcaded59a795f07c5576dcaf6d177b8ab9dff81c21f2548608"} Feb 17 00:43:51 crc kubenswrapper[4805]: I0217 00:43:51.275797 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" event={"ID":"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81","Type":"ContainerStarted","Data":"05b12b3dfdc57a6df851385fb79255826d74fff27506d848fe0ce19f5b75185b"} Feb 17 00:43:52 crc kubenswrapper[4805]: E0217 00:43:52.620211 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 17 00:43:52 crc kubenswrapper[4805]: E0217 00:43:52.621482 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppmgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-r8kk4_openstack(e89462a0-ccda-47cf-93e9-b8cd763c3b08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 00:43:52 crc kubenswrapper[4805]: E0217 00:43:52.627900 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-r8kk4" podUID="e89462a0-ccda-47cf-93e9-b8cd763c3b08" Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.079302 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.079371 4805 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.299260 4805 generic.go:334] "Generic (PLEG): container finished" podID="b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81" containerID="5ab3802f90bed18932984953628bceb7144d8612bcda9db4f6c8becf02c5439c" exitCode=0 Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.299319 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" event={"ID":"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81","Type":"ContainerDied","Data":"5ab3802f90bed18932984953628bceb7144d8612bcda9db4f6c8becf02c5439c"} Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.301627 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-64sw8" event={"ID":"9ddd3866-a515-49a8-8b48-aa6981c7536e","Type":"ContainerStarted","Data":"36680b14b252dc43ab1db9e9556ba6abcf9347b16cbcea4a985d74bca748cc78"} Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.318057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab1916fe-f237-4dd1-8af5-f18a52248311","Type":"ContainerStarted","Data":"b0f8890a90cf6fcb2ec4f0157f3bf038f6a5344d03eb2432da5b86681671390b"} Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.333023 4805 generic.go:334] "Generic (PLEG): container finished" podID="7c71d620-0f06-4b24-b647-98e1ea0004b1" containerID="412f80eadfd096862773228c45a0d8943aa3f8e2994f5b5999c7899a0024cba5" exitCode=0 Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.333092 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" event={"ID":"7c71d620-0f06-4b24-b647-98e1ea0004b1","Type":"ContainerDied","Data":"412f80eadfd096862773228c45a0d8943aa3f8e2994f5b5999c7899a0024cba5"} Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.362579 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb577" event={"ID":"ac778b90-57e0-42ae-b661-8d7418eb00c4","Type":"ContainerStarted","Data":"79176a8e77d9ea3f57f6a0804238aef2e7a723e97179966c5193e640f33c2e0c"} Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.394251 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-64sw8" podStartSLOduration=9.5212748 podStartE2EDuration="34.394233691s" podCreationTimestamp="2026-02-17 00:43:19 +0000 UTC" firstStartedPulling="2026-02-17 00:43:21.413770937 +0000 UTC m=+1227.429580335" lastFinishedPulling="2026-02-17 00:43:46.286729798 +0000 UTC m=+1252.302539226" observedRunningTime="2026-02-17 00:43:53.370677219 +0000 UTC m=+1259.386486617" watchObservedRunningTime="2026-02-17 00:43:53.394233691 +0000 UTC m=+1259.410043089" Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.428282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-549f7bcc7b-l2thx" event={"ID":"e07b33ca-66f5-4047-b754-ac637f0db5a5","Type":"ContainerStarted","Data":"c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1"} Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.428323 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-549f7bcc7b-l2thx" 
event={"ID":"e07b33ca-66f5-4047-b754-ac637f0db5a5","Type":"ContainerStarted","Data":"eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff"} Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.428687 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:43:53 crc kubenswrapper[4805]: E0217 00:43:53.440492 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-r8kk4" podUID="e89462a0-ccda-47cf-93e9-b8cd763c3b08" Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.531238 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qb577" podStartSLOduration=15.531217583 podStartE2EDuration="15.531217583s" podCreationTimestamp="2026-02-17 00:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:53.471660828 +0000 UTC m=+1259.487470226" watchObservedRunningTime="2026-02-17 00:43:53.531217583 +0000 UTC m=+1259.547026981" Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.570355 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-549f7bcc7b-l2thx" podStartSLOduration=5.570246011 podStartE2EDuration="5.570246011s" podCreationTimestamp="2026-02-17 00:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:53.525364499 +0000 UTC m=+1259.541173897" watchObservedRunningTime="2026-02-17 00:43:53.570246011 +0000 UTC m=+1259.586055409" Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.799672 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.924058 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-swift-storage-0\") pod \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.924124 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-nb\") pod \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.924168 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-svc\") pod \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.924275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t278s\" (UniqueName: \"kubernetes.io/projected/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-kube-api-access-t278s\") pod \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.924324 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-sb\") pod \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.924405 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-config\") pod \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\" (UID: \"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81\") " Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.932485 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-kube-api-access-t278s" (OuterVolumeSpecName: "kube-api-access-t278s") pod "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81" (UID: "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81"). InnerVolumeSpecName "kube-api-access-t278s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.948446 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81" (UID: "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.960855 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81" (UID: "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.961358 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81" (UID: "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:53 crc kubenswrapper[4805]: I0217 00:43:53.964714 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-config" (OuterVolumeSpecName: "config") pod "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81" (UID: "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.026984 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t278s\" (UniqueName: \"kubernetes.io/projected/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-kube-api-access-t278s\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.027304 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.027314 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.027325 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.027344 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.047723 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-699c4cfd75-pjgkq"] Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.047829 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-m2qjt" podUID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Feb 17 00:43:54 crc kubenswrapper[4805]: E0217 00:43:54.048104 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81" containerName="init" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.048121 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81" containerName="init" Feb 17 00:43:54 crc kubenswrapper[4805]: E0217 00:43:54.048146 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerName="dnsmasq-dns" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.048152 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerName="dnsmasq-dns" Feb 17 00:43:54 crc kubenswrapper[4805]: E0217 
00:43:54.048172 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerName="init" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.048179 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerName="init" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.048387 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81" containerName="init" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.048414 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd9ba13-24f3-40a0-8354-a9e38c7d1368" containerName="dnsmasq-dns" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.049453 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.050880 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81" (UID: "b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.054762 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.055406 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.086111 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-699c4cfd75-pjgkq"] Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.129470 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-httpd-config\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.129512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-ovndb-tls-certs\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.129530 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-internal-tls-certs\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.129558 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-public-tls-certs\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.129611 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-config\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.129680 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-combined-ca-bundle\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.129891 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqkdt\" (UniqueName: \"kubernetes.io/projected/47d4f059-d277-419c-8a13-ed2a1a89a73c-kube-api-access-gqkdt\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.129971 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.231430 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqkdt\" (UniqueName: \"kubernetes.io/projected/47d4f059-d277-419c-8a13-ed2a1a89a73c-kube-api-access-gqkdt\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.231503 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-httpd-config\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.231522 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-ovndb-tls-certs\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.231540 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-internal-tls-certs\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.231582 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-public-tls-certs\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.231598 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-config\") pod 
\"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.231653 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-combined-ca-bundle\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.236508 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-httpd-config\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.238194 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-config\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.238302 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-combined-ca-bundle\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.243976 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-public-tls-certs\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.246066 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-ovndb-tls-certs\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.255123 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-internal-tls-certs\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.274221 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqkdt\" (UniqueName: \"kubernetes.io/projected/47d4f059-d277-419c-8a13-ed2a1a89a73c-kube-api-access-gqkdt\") pod \"neutron-699c4cfd75-pjgkq\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.371568 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.439077 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" event={"ID":"7c71d620-0f06-4b24-b647-98e1ea0004b1","Type":"ContainerStarted","Data":"381d3ddac6be45bf607ade69403ada5024b6709e04a0738e354c5548ef642007"} Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.440293 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.442609 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.442820 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-dnkf5" event={"ID":"b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81","Type":"ContainerDied","Data":"05b12b3dfdc57a6df851385fb79255826d74fff27506d848fe0ce19f5b75185b"} Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.442859 4805 scope.go:117] "RemoveContainer" containerID="5ab3802f90bed18932984953628bceb7144d8612bcda9db4f6c8becf02c5439c" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.476456 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" podStartSLOduration=6.476438084 podStartE2EDuration="6.476438084s" podCreationTimestamp="2026-02-17 00:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:54.46810245 +0000 UTC m=+1260.483911848" watchObservedRunningTime="2026-02-17 00:43:54.476438084 +0000 UTC m=+1260.492247482" Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.553696 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-dnkf5"] Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.574828 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-dnkf5"] Feb 17 00:43:54 crc kubenswrapper[4805]: I0217 00:43:54.806193 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81" path="/var/lib/kubelet/pods/b47ec8eb-d04d-4ce7-93bc-1c557cd3ad81/volumes" Feb 17 00:43:55 crc kubenswrapper[4805]: I0217 00:43:55.116366 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-699c4cfd75-pjgkq"] Feb 17 00:43:55 crc kubenswrapper[4805]: I0217 00:43:55.473207 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699c4cfd75-pjgkq" event={"ID":"47d4f059-d277-419c-8a13-ed2a1a89a73c","Type":"ContainerStarted","Data":"17a416a85a870e6a61efb6f2fc8cb11aa366cb308ea76d674d24abc230271a1b"} Feb 17 00:43:55 crc kubenswrapper[4805]: I0217 00:43:55.473240 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699c4cfd75-pjgkq" event={"ID":"47d4f059-d277-419c-8a13-ed2a1a89a73c","Type":"ContainerStarted","Data":"cef012a50ce52c88cb178f6dc3d87d0cccada6811336ee178a02223213badd1e"} Feb 17 00:43:56 crc kubenswrapper[4805]: I0217 00:43:56.487414 4805 generic.go:334] "Generic (PLEG): container finished" podID="9ddd3866-a515-49a8-8b48-aa6981c7536e" containerID="36680b14b252dc43ab1db9e9556ba6abcf9347b16cbcea4a985d74bca748cc78" exitCode=0 Feb 17 00:43:56 crc kubenswrapper[4805]: I0217 00:43:56.487638 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-64sw8" event={"ID":"9ddd3866-a515-49a8-8b48-aa6981c7536e","Type":"ContainerDied","Data":"36680b14b252dc43ab1db9e9556ba6abcf9347b16cbcea4a985d74bca748cc78"} Feb 17 00:43:56 crc kubenswrapper[4805]: I0217 00:43:56.493354 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699c4cfd75-pjgkq" event={"ID":"47d4f059-d277-419c-8a13-ed2a1a89a73c","Type":"ContainerStarted","Data":"9ffc5e90c21136ba6170e6476c4bcbdd636aed2614287db1aea84ae2e77dcb9b"} Feb 17 00:43:56 crc kubenswrapper[4805]: I0217 00:43:56.546436 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-699c4cfd75-pjgkq" podStartSLOduration=2.546414695 podStartE2EDuration="2.546414695s" podCreationTimestamp="2026-02-17 00:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:43:56.52775149 +0000 UTC m=+1262.543560888" watchObservedRunningTime="2026-02-17 00:43:56.546414695 +0000 UTC m=+1262.562224093" Feb 17 00:43:57 crc kubenswrapper[4805]: I0217 00:43:57.505035 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac778b90-57e0-42ae-b661-8d7418eb00c4" containerID="79176a8e77d9ea3f57f6a0804238aef2e7a723e97179966c5193e640f33c2e0c" exitCode=0 Feb 17 00:43:57 crc kubenswrapper[4805]: I0217 00:43:57.505177 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb577" event={"ID":"ac778b90-57e0-42ae-b661-8d7418eb00c4","Type":"ContainerDied","Data":"79176a8e77d9ea3f57f6a0804238aef2e7a723e97179966c5193e640f33c2e0c"} Feb 17 00:43:57 crc kubenswrapper[4805]: I0217 00:43:57.505675 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:43:58 crc kubenswrapper[4805]: I0217 00:43:58.619487 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:43:58 crc kubenswrapper[4805]: I0217 00:43:58.702511 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-28tfw"] Feb 17 00:43:58 crc kubenswrapper[4805]: I0217 00:43:58.702726 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" podUID="7207c2a4-875d-4f81-a311-0c0d495aea56" containerName="dnsmasq-dns" containerID="cri-o://690b145e3da1ae3c6955d4ec9b71175c34f56fc87478db4be94f2268acba1c29" gracePeriod=10 Feb 17 00:43:59 crc kubenswrapper[4805]: I0217 00:43:59.533459 4805 generic.go:334] "Generic (PLEG): container finished" podID="7207c2a4-875d-4f81-a311-0c0d495aea56" containerID="690b145e3da1ae3c6955d4ec9b71175c34f56fc87478db4be94f2268acba1c29" exitCode=0 Feb 17 00:43:59 crc kubenswrapper[4805]: I0217 00:43:59.533542 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" event={"ID":"7207c2a4-875d-4f81-a311-0c0d495aea56","Type":"ContainerDied","Data":"690b145e3da1ae3c6955d4ec9b71175c34f56fc87478db4be94f2268acba1c29"} Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.761880 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-64sw8" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.774704 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qb577" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.779184 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-config-data\") pod \"ac778b90-57e0-42ae-b661-8d7418eb00c4\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.779263 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-config-data\") pod \"9ddd3866-a515-49a8-8b48-aa6981c7536e\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.779302 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-scripts\") pod \"9ddd3866-a515-49a8-8b48-aa6981c7536e\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.780227 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-fernet-keys\") pod \"ac778b90-57e0-42ae-b661-8d7418eb00c4\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.780262 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ddd3866-a515-49a8-8b48-aa6981c7536e-logs\") pod \"9ddd3866-a515-49a8-8b48-aa6981c7536e\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.780309 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-credential-keys\") pod \"ac778b90-57e0-42ae-b661-8d7418eb00c4\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.780358 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7b79\" (UniqueName: \"kubernetes.io/projected/ac778b90-57e0-42ae-b661-8d7418eb00c4-kube-api-access-v7b79\") pod \"ac778b90-57e0-42ae-b661-8d7418eb00c4\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.780410 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmp9q\" (UniqueName: \"kubernetes.io/projected/9ddd3866-a515-49a8-8b48-aa6981c7536e-kube-api-access-cmp9q\") pod \"9ddd3866-a515-49a8-8b48-aa6981c7536e\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") " Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.780454 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-scripts\") pod \"ac778b90-57e0-42ae-b661-8d7418eb00c4\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.780506 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-combined-ca-bundle\") pod \"9ddd3866-a515-49a8-8b48-aa6981c7536e\" (UID: \"9ddd3866-a515-49a8-8b48-aa6981c7536e\") 
" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.780575 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-combined-ca-bundle\") pod \"ac778b90-57e0-42ae-b661-8d7418eb00c4\" (UID: \"ac778b90-57e0-42ae-b661-8d7418eb00c4\") " Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.782284 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ddd3866-a515-49a8-8b48-aa6981c7536e-logs" (OuterVolumeSpecName: "logs") pod "9ddd3866-a515-49a8-8b48-aa6981c7536e" (UID: "9ddd3866-a515-49a8-8b48-aa6981c7536e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.801225 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-scripts" (OuterVolumeSpecName: "scripts") pod "ac778b90-57e0-42ae-b661-8d7418eb00c4" (UID: "ac778b90-57e0-42ae-b661-8d7418eb00c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.801798 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ac778b90-57e0-42ae-b661-8d7418eb00c4" (UID: "ac778b90-57e0-42ae-b661-8d7418eb00c4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.801933 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-scripts" (OuterVolumeSpecName: "scripts") pod "9ddd3866-a515-49a8-8b48-aa6981c7536e" (UID: "9ddd3866-a515-49a8-8b48-aa6981c7536e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.802065 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac778b90-57e0-42ae-b661-8d7418eb00c4-kube-api-access-v7b79" (OuterVolumeSpecName: "kube-api-access-v7b79") pod "ac778b90-57e0-42ae-b661-8d7418eb00c4" (UID: "ac778b90-57e0-42ae-b661-8d7418eb00c4"). InnerVolumeSpecName "kube-api-access-v7b79". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.817099 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ddd3866-a515-49a8-8b48-aa6981c7536e-kube-api-access-cmp9q" (OuterVolumeSpecName: "kube-api-access-cmp9q") pod "9ddd3866-a515-49a8-8b48-aa6981c7536e" (UID: "9ddd3866-a515-49a8-8b48-aa6981c7536e"). InnerVolumeSpecName "kube-api-access-cmp9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.849638 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ddd3866-a515-49a8-8b48-aa6981c7536e" (UID: "9ddd3866-a515-49a8-8b48-aa6981c7536e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.852428 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-config-data" (OuterVolumeSpecName: "config-data") pod "ac778b90-57e0-42ae-b661-8d7418eb00c4" (UID: "ac778b90-57e0-42ae-b661-8d7418eb00c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.861668 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac778b90-57e0-42ae-b661-8d7418eb00c4" (UID: "ac778b90-57e0-42ae-b661-8d7418eb00c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.871617 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ac778b90-57e0-42ae-b661-8d7418eb00c4" (UID: "ac778b90-57e0-42ae-b661-8d7418eb00c4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.875032 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-config-data" (OuterVolumeSpecName: "config-data") pod "9ddd3866-a515-49a8-8b48-aa6981c7536e" (UID: "9ddd3866-a515-49a8-8b48-aa6981c7536e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.883496 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.883527 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.883537 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.883587 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.883595 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ddd3866-a515-49a8-8b48-aa6981c7536e-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.883604 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.883612 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9ddd3866-a515-49a8-8b48-aa6981c7536e-logs\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.883636 4805 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.883647 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7b79\" (UniqueName: \"kubernetes.io/projected/ac778b90-57e0-42ae-b661-8d7418eb00c4-kube-api-access-v7b79\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.883657 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmp9q\" (UniqueName: \"kubernetes.io/projected/9ddd3866-a515-49a8-8b48-aa6981c7536e-kube-api-access-cmp9q\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:00 crc kubenswrapper[4805]: I0217 00:44:00.883665 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac778b90-57e0-42ae-b661-8d7418eb00c4-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.083750 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.087092 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-svc\") pod \"7207c2a4-875d-4f81-a311-0c0d495aea56\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.087171 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-config\") pod \"7207c2a4-875d-4f81-a311-0c0d495aea56\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.087223 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-nb\") pod \"7207c2a4-875d-4f81-a311-0c0d495aea56\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.087280 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrksn\" (UniqueName: \"kubernetes.io/projected/7207c2a4-875d-4f81-a311-0c0d495aea56-kube-api-access-mrksn\") pod \"7207c2a4-875d-4f81-a311-0c0d495aea56\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.087435 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-swift-storage-0\") pod \"7207c2a4-875d-4f81-a311-0c0d495aea56\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.087516 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-sb\") pod \"7207c2a4-875d-4f81-a311-0c0d495aea56\" (UID: \"7207c2a4-875d-4f81-a311-0c0d495aea56\") " Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 
00:44:01.097549 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7207c2a4-875d-4f81-a311-0c0d495aea56-kube-api-access-mrksn" (OuterVolumeSpecName: "kube-api-access-mrksn") pod "7207c2a4-875d-4f81-a311-0c0d495aea56" (UID: "7207c2a4-875d-4f81-a311-0c0d495aea56"). InnerVolumeSpecName "kube-api-access-mrksn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.156001 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7207c2a4-875d-4f81-a311-0c0d495aea56" (UID: "7207c2a4-875d-4f81-a311-0c0d495aea56"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.186897 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7207c2a4-875d-4f81-a311-0c0d495aea56" (UID: "7207c2a4-875d-4f81-a311-0c0d495aea56"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.201748 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.201785 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.201799 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrksn\" (UniqueName: \"kubernetes.io/projected/7207c2a4-875d-4f81-a311-0c0d495aea56-kube-api-access-mrksn\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.202969 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7207c2a4-875d-4f81-a311-0c0d495aea56" (UID: "7207c2a4-875d-4f81-a311-0c0d495aea56"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.209738 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-config" (OuterVolumeSpecName: "config") pod "7207c2a4-875d-4f81-a311-0c0d495aea56" (UID: "7207c2a4-875d-4f81-a311-0c0d495aea56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.222900 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7207c2a4-875d-4f81-a311-0c0d495aea56" (UID: "7207c2a4-875d-4f81-a311-0c0d495aea56"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.303410 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.303483 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.303496 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7207c2a4-875d-4f81-a311-0c0d495aea56-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.555810 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-64sw8" event={"ID":"9ddd3866-a515-49a8-8b48-aa6981c7536e","Type":"ContainerDied","Data":"3e3b3f7cd705388ace853b4e42ffb993ac873721022a90f9e33359da5ebc6102"} Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.555848 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e3b3f7cd705388ace853b4e42ffb993ac873721022a90f9e33359da5ebc6102" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.555904 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-64sw8" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.560241 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab1916fe-f237-4dd1-8af5-f18a52248311","Type":"ContainerStarted","Data":"6eaf75057a0c587e7342058e5dac139ac53cf1defd7285a151e4ff2f5eb666c5"} Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.562757 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" event={"ID":"7207c2a4-875d-4f81-a311-0c0d495aea56","Type":"ContainerDied","Data":"3d59be7c63093654cb33ef9168d2870ef6cc6b2ba1b827e17882d46007d7dde9"} Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.562795 4805 scope.go:117] "RemoveContainer" containerID="690b145e3da1ae3c6955d4ec9b71175c34f56fc87478db4be94f2268acba1c29" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.562906 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-28tfw" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.566262 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qb577" event={"ID":"ac778b90-57e0-42ae-b661-8d7418eb00c4","Type":"ContainerDied","Data":"e8ab8f85b4709b14668fc49427cf68e8520cd036dd70403908ec039b7bffaac0"} Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.566294 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8ab8f85b4709b14668fc49427cf68e8520cd036dd70403908ec039b7bffaac0" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.566301 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qb577" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.568582 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ztgpf" event={"ID":"aacb9ef7-b269-44c2-9b51-62067ea3545b","Type":"ContainerStarted","Data":"a4b6d3b9acf976a3b824591e2e345591c3ae1f9b703ce1320ac7a1b395415efa"} Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.592492 4805 scope.go:117] "RemoveContainer" containerID="9cac03490ee02e6350a86cfb93e3ce4a1c6b0c4c6f5cdaaea21fa346adc96e57" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.598145 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-ztgpf" podStartSLOduration=2.507353046 podStartE2EDuration="42.598128857s" podCreationTimestamp="2026-02-17 00:43:19 +0000 UTC" firstStartedPulling="2026-02-17 00:43:21.031741731 +0000 UTC m=+1227.047551129" lastFinishedPulling="2026-02-17 00:44:01.122517542 +0000 UTC m=+1267.138326940" observedRunningTime="2026-02-17 00:44:01.592702594 +0000 UTC m=+1267.608511992" watchObservedRunningTime="2026-02-17 00:44:01.598128857 +0000 UTC m=+1267.613938255" Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.621714 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-28tfw"] Feb 17 00:44:01 crc kubenswrapper[4805]: I0217 00:44:01.631550 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-28tfw"] Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.014876 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7d5b44676f-vbgmb"] Feb 17 00:44:02 crc kubenswrapper[4805]: E0217 00:44:02.015777 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7207c2a4-875d-4f81-a311-0c0d495aea56" containerName="dnsmasq-dns" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.015791 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7207c2a4-875d-4f81-a311-0c0d495aea56" containerName="dnsmasq-dns" Feb 17 00:44:02 crc kubenswrapper[4805]: E0217 00:44:02.015808 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ddd3866-a515-49a8-8b48-aa6981c7536e" containerName="placement-db-sync" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.015815 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddd3866-a515-49a8-8b48-aa6981c7536e" containerName="placement-db-sync" Feb 17 00:44:02 crc kubenswrapper[4805]: E0217 00:44:02.015837 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac778b90-57e0-42ae-b661-8d7418eb00c4" containerName="keystone-bootstrap" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.015845 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac778b90-57e0-42ae-b661-8d7418eb00c4" containerName="keystone-bootstrap" Feb 17 00:44:02 crc kubenswrapper[4805]: E0217 00:44:02.015858 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7207c2a4-875d-4f81-a311-0c0d495aea56" containerName="init" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.015864 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7207c2a4-875d-4f81-a311-0c0d495aea56" containerName="init" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.016040 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac778b90-57e0-42ae-b661-8d7418eb00c4" containerName="keystone-bootstrap" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.016056 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7207c2a4-875d-4f81-a311-0c0d495aea56" containerName="dnsmasq-dns" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.016079 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ddd3866-a515-49a8-8b48-aa6981c7536e" containerName="placement-db-sync" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.016744 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.022198 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.022361 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.022534 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.022650 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.022733 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.022819 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xd9kt" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.098398 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-credential-keys\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.098473 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-scripts\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.098506 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-combined-ca-bundle\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.098549 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-internal-tls-certs\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.098595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7szb\" (UniqueName: \"kubernetes.io/projected/b74fe76f-17fb-498c-a46c-088c2df512d5-kube-api-access-n7szb\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.098634 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-public-tls-certs\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.098668 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-fernet-keys\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.098733 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-config-data\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.150533 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d5b44676f-vbgmb"] Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.200780 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7szb\" (UniqueName: \"kubernetes.io/projected/b74fe76f-17fb-498c-a46c-088c2df512d5-kube-api-access-n7szb\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.200836 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-public-tls-certs\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.200863 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-fernet-keys\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.201075 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-config-data\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.201243 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-credential-keys\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.201358 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-scripts\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " 
pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.201409 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-combined-ca-bundle\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.201489 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-internal-tls-certs\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.205589 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-fernet-keys\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.207029 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-scripts\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.207288 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-combined-ca-bundle\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.208763 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-config-data\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.208965 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-public-tls-certs\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.210868 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-internal-tls-certs\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.218770 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b74fe76f-17fb-498c-a46c-088c2df512d5-credential-keys\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.221963 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n7szb\" (UniqueName: \"kubernetes.io/projected/b74fe76f-17fb-498c-a46c-088c2df512d5-kube-api-access-n7szb\") pod \"keystone-7d5b44676f-vbgmb\" (UID: \"b74fe76f-17fb-498c-a46c-088c2df512d5\") " pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.414449 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.669548 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-65599f5544-8m95b"] Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.671256 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.699805 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.701228 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.707263 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b667f" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.707510 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.707669 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.712195 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-combined-ca-bundle\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.712248 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-config-data\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.712287 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-internal-tls-certs\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.712402 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-logs\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.712450 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-public-tls-certs\") pod \"placement-65599f5544-8m95b\" (UID: 
\"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.712476 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4d8m\" (UniqueName: \"kubernetes.io/projected/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-kube-api-access-r4d8m\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.712507 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-scripts\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.755430 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65599f5544-8m95b"] Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.814979 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-combined-ca-bundle\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.815397 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-config-data\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.815520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-internal-tls-certs\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.815717 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-logs\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.815880 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4d8m\" (UniqueName: \"kubernetes.io/projected/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-kube-api-access-r4d8m\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.815991 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-public-tls-certs\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.816083 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-scripts\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.816505 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-logs\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.818041 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7207c2a4-875d-4f81-a311-0c0d495aea56" path="/var/lib/kubelet/pods/7207c2a4-875d-4f81-a311-0c0d495aea56/volumes" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.828562 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-combined-ca-bundle\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.828906 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-config-data\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.829438 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-public-tls-certs\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.834577 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4d8m\" (UniqueName: \"kubernetes.io/projected/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-kube-api-access-r4d8m\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.834950 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-scripts\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.843023 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab220f14-8200-4576-a0bf-ee0bc1d2e11e-internal-tls-certs\") pod \"placement-65599f5544-8m95b\" (UID: \"ab220f14-8200-4576-a0bf-ee0bc1d2e11e\") " pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:02 crc kubenswrapper[4805]: W0217 00:44:02.950980 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb74fe76f_17fb_498c_a46c_088c2df512d5.slice/crio-3fcdacffd6a78cebb123bc3164e71daba5c5059bad204e7077fdcde4a1ba6d7e WatchSource:0}: Error finding container 3fcdacffd6a78cebb123bc3164e71daba5c5059bad204e7077fdcde4a1ba6d7e: Status 404 returned error can't find the container with id 
3fcdacffd6a78cebb123bc3164e71daba5c5059bad204e7077fdcde4a1ba6d7e Feb 17 00:44:02 crc kubenswrapper[4805]: I0217 00:44:02.963037 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d5b44676f-vbgmb"] Feb 17 00:44:03 crc kubenswrapper[4805]: I0217 00:44:03.017364 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:03 crc kubenswrapper[4805]: I0217 00:44:03.506477 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65599f5544-8m95b"] Feb 17 00:44:03 crc kubenswrapper[4805]: I0217 00:44:03.621866 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d5b44676f-vbgmb" event={"ID":"b74fe76f-17fb-498c-a46c-088c2df512d5","Type":"ContainerStarted","Data":"369d40c7455a9a2bd13a79b9da19192014aa2edca28dd80787ca601f8577760c"} Feb 17 00:44:03 crc kubenswrapper[4805]: I0217 00:44:03.621906 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d5b44676f-vbgmb" event={"ID":"b74fe76f-17fb-498c-a46c-088c2df512d5","Type":"ContainerStarted","Data":"3fcdacffd6a78cebb123bc3164e71daba5c5059bad204e7077fdcde4a1ba6d7e"} Feb 17 00:44:03 crc kubenswrapper[4805]: I0217 00:44:03.622025 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:03 crc kubenswrapper[4805]: I0217 00:44:03.623126 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65599f5544-8m95b" event={"ID":"ab220f14-8200-4576-a0bf-ee0bc1d2e11e","Type":"ContainerStarted","Data":"b1ed3f0e8590bc5ab586bf781d7890870f1621c84a70fbff0dfc6cab219b2375"} Feb 17 00:44:03 crc kubenswrapper[4805]: I0217 00:44:03.658682 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7d5b44676f-vbgmb" podStartSLOduration=2.658665862 podStartE2EDuration="2.658665862s" podCreationTimestamp="2026-02-17 00:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:03.636423016 +0000 UTC m=+1269.652232424" watchObservedRunningTime="2026-02-17 00:44:03.658665862 +0000 UTC m=+1269.674475260" Feb 17 00:44:04 crc kubenswrapper[4805]: I0217 00:44:04.634278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65599f5544-8m95b" event={"ID":"ab220f14-8200-4576-a0bf-ee0bc1d2e11e","Type":"ContainerStarted","Data":"22df685e479f38a0edca6e3bea9a8a62482585d5b4a41e565fb96fdd1c09200a"} Feb 17 00:44:05 crc kubenswrapper[4805]: I0217 00:44:05.641935 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fbvsz" event={"ID":"d265cd4b-2604-4a2e-902a-d31a861c2439","Type":"ContainerStarted","Data":"89b28ea93899aa41bad44f2b915dce5f20e3f498b809ed9b33e107bfe115f4f1"} Feb 17 00:44:05 crc kubenswrapper[4805]: I0217 00:44:05.645361 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65599f5544-8m95b" event={"ID":"ab220f14-8200-4576-a0bf-ee0bc1d2e11e","Type":"ContainerStarted","Data":"b84c40c36a4196eb3a27dd4a0b0eb9a2fbb96e6bc68aaf2f6ddfc0b94afbf817"} Feb 17 00:44:05 crc kubenswrapper[4805]: I0217 00:44:05.645495 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:05 crc kubenswrapper[4805]: I0217 00:44:05.665127 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-fbvsz" 
podStartSLOduration=2.9788601420000003 podStartE2EDuration="46.665110487s" podCreationTimestamp="2026-02-17 00:43:19 +0000 UTC" firstStartedPulling="2026-02-17 00:43:21.486504906 +0000 UTC m=+1227.502314304" lastFinishedPulling="2026-02-17 00:44:05.172755251 +0000 UTC m=+1271.188564649" observedRunningTime="2026-02-17 00:44:05.659635433 +0000 UTC m=+1271.675444831" watchObservedRunningTime="2026-02-17 00:44:05.665110487 +0000 UTC m=+1271.680919885" Feb 17 00:44:05 crc kubenswrapper[4805]: I0217 00:44:05.692307 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-65599f5544-8m95b" podStartSLOduration=3.690988304 podStartE2EDuration="3.690988304s" podCreationTimestamp="2026-02-17 00:44:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:05.680647104 +0000 UTC m=+1271.696456512" watchObservedRunningTime="2026-02-17 00:44:05.690988304 +0000 UTC m=+1271.706797702" Feb 17 00:44:06 crc kubenswrapper[4805]: I0217 00:44:06.655936 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:09 crc kubenswrapper[4805]: I0217 00:44:09.695686 4805 generic.go:334] "Generic (PLEG): container finished" podID="d265cd4b-2604-4a2e-902a-d31a861c2439" containerID="89b28ea93899aa41bad44f2b915dce5f20e3f498b809ed9b33e107bfe115f4f1" exitCode=0 Feb 17 00:44:09 crc kubenswrapper[4805]: I0217 00:44:09.695769 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fbvsz" event={"ID":"d265cd4b-2604-4a2e-902a-d31a861c2439","Type":"ContainerDied","Data":"89b28ea93899aa41bad44f2b915dce5f20e3f498b809ed9b33e107bfe115f4f1"} Feb 17 00:44:10 crc kubenswrapper[4805]: I0217 00:44:10.711255 4805 generic.go:334] "Generic (PLEG): container finished" podID="aacb9ef7-b269-44c2-9b51-62067ea3545b" containerID="a4b6d3b9acf976a3b824591e2e345591c3ae1f9b703ce1320ac7a1b395415efa" exitCode=0 Feb 17 00:44:10 crc kubenswrapper[4805]: I0217 00:44:10.711646 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ztgpf" event={"ID":"aacb9ef7-b269-44c2-9b51-62067ea3545b","Type":"ContainerDied","Data":"a4b6d3b9acf976a3b824591e2e345591c3ae1f9b703ce1320ac7a1b395415efa"} Feb 17 00:44:10 crc kubenswrapper[4805]: I0217 00:44:10.715023 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-r8kk4" event={"ID":"e89462a0-ccda-47cf-93e9-b8cd763c3b08","Type":"ContainerStarted","Data":"f8193068ea49b80a759fcc4f57663e132a889f3763ab6c888e8bcb88ccc7044a"} Feb 17 00:44:10 crc kubenswrapper[4805]: I0217 00:44:10.780258 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-r8kk4" podStartSLOduration=3.360491659 podStartE2EDuration="51.780235411s" podCreationTimestamp="2026-02-17 00:43:19 +0000 UTC" firstStartedPulling="2026-02-17 00:43:20.975989854 +0000 UTC m=+1226.991799252" lastFinishedPulling="2026-02-17 00:44:09.395733606 +0000 UTC m=+1275.411543004" observedRunningTime="2026-02-17 00:44:10.766085643 +0000 UTC m=+1276.781895041" watchObservedRunningTime="2026-02-17 00:44:10.780235411 +0000 UTC m=+1276.796044829" Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.163351 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.254722 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8588c\" (UniqueName: \"kubernetes.io/projected/d265cd4b-2604-4a2e-902a-d31a861c2439-kube-api-access-8588c\") pod \"d265cd4b-2604-4a2e-902a-d31a861c2439\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.254807 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-db-sync-config-data\") pod \"d265cd4b-2604-4a2e-902a-d31a861c2439\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.254999 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-combined-ca-bundle\") pod \"d265cd4b-2604-4a2e-902a-d31a861c2439\" (UID: \"d265cd4b-2604-4a2e-902a-d31a861c2439\") " Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.259705 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d265cd4b-2604-4a2e-902a-d31a861c2439-kube-api-access-8588c" (OuterVolumeSpecName: "kube-api-access-8588c") pod "d265cd4b-2604-4a2e-902a-d31a861c2439" (UID: "d265cd4b-2604-4a2e-902a-d31a861c2439"). InnerVolumeSpecName "kube-api-access-8588c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.259786 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d265cd4b-2604-4a2e-902a-d31a861c2439" (UID: "d265cd4b-2604-4a2e-902a-d31a861c2439"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.283972 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d265cd4b-2604-4a2e-902a-d31a861c2439" (UID: "d265cd4b-2604-4a2e-902a-d31a861c2439"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.357026 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.357056 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8588c\" (UniqueName: \"kubernetes.io/projected/d265cd4b-2604-4a2e-902a-d31a861c2439-kube-api-access-8588c\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.357066 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d265cd4b-2604-4a2e-902a-d31a861c2439-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.759195 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fbvsz" event={"ID":"d265cd4b-2604-4a2e-902a-d31a861c2439","Type":"ContainerDied","Data":"17099717ad2934614b7f86e7aa787608e51f196137218f4c470f47c01d2c7801"} Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.759734 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17099717ad2934614b7f86e7aa787608e51f196137218f4c470f47c01d2c7801" Feb 17 00:44:11 crc kubenswrapper[4805]: I0217 00:44:11.759264 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fbvsz" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.149428 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b88c58c9-fwsz2"] Feb 17 00:44:12 crc kubenswrapper[4805]: E0217 00:44:12.149838 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d265cd4b-2604-4a2e-902a-d31a861c2439" containerName="barbican-db-sync" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.149855 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d265cd4b-2604-4a2e-902a-d31a861c2439" containerName="barbican-db-sync" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.150046 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d265cd4b-2604-4a2e-902a-d31a861c2439" containerName="barbican-db-sync" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.151028 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.160896 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-584fd88cbb-md2tp"] Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.161976 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.162513 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.184532 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.197682 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.197861 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mrtqj" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.214385 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b88c58c9-fwsz2"] Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.281345 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efcafb85-5938-470c-90a7-acfb359882af-logs\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.281403 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/911a5d99-5b74-4633-9d7a-40bee6bb01a4-config-data-custom\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.281436 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911a5d99-5b74-4633-9d7a-40bee6bb01a4-config-data\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.281462 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911a5d99-5b74-4633-9d7a-40bee6bb01a4-combined-ca-bundle\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.281481 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efcafb85-5938-470c-90a7-acfb359882af-combined-ca-bundle\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.281530 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efcafb85-5938-470c-90a7-acfb359882af-config-data-custom\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.281561 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pqms\" (UniqueName: 
\"kubernetes.io/projected/911a5d99-5b74-4633-9d7a-40bee6bb01a4-kube-api-access-5pqms\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.281581 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2zvm\" (UniqueName: \"kubernetes.io/projected/efcafb85-5938-470c-90a7-acfb359882af-kube-api-access-m2zvm\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.281600 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911a5d99-5b74-4633-9d7a-40bee6bb01a4-logs\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.281656 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efcafb85-5938-470c-90a7-acfb359882af-config-data\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.300392 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-584fd88cbb-md2tp"] Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.383073 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pqms\" (UniqueName: \"kubernetes.io/projected/911a5d99-5b74-4633-9d7a-40bee6bb01a4-kube-api-access-5pqms\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.383283 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2zvm\" (UniqueName: \"kubernetes.io/projected/efcafb85-5938-470c-90a7-acfb359882af-kube-api-access-m2zvm\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.383485 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911a5d99-5b74-4633-9d7a-40bee6bb01a4-logs\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.383621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efcafb85-5938-470c-90a7-acfb359882af-config-data\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.383720 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efcafb85-5938-470c-90a7-acfb359882af-logs\") pod 
\"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.383787 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/911a5d99-5b74-4633-9d7a-40bee6bb01a4-config-data-custom\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.383863 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911a5d99-5b74-4633-9d7a-40bee6bb01a4-config-data\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.383937 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911a5d99-5b74-4633-9d7a-40bee6bb01a4-combined-ca-bundle\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.384007 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efcafb85-5938-470c-90a7-acfb359882af-combined-ca-bundle\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.384129 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efcafb85-5938-470c-90a7-acfb359882af-config-data-custom\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.386746 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efcafb85-5938-470c-90a7-acfb359882af-logs\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.387339 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911a5d99-5b74-4633-9d7a-40bee6bb01a4-logs\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.389002 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efcafb85-5938-470c-90a7-acfb359882af-config-data-custom\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.394134 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/911a5d99-5b74-4633-9d7a-40bee6bb01a4-config-data-custom\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.405430 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911a5d99-5b74-4633-9d7a-40bee6bb01a4-config-data\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.406319 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efcafb85-5938-470c-90a7-acfb359882af-combined-ca-bundle\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.407225 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efcafb85-5938-470c-90a7-acfb359882af-config-data\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.408238 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911a5d99-5b74-4633-9d7a-40bee6bb01a4-combined-ca-bundle\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.409882 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pqms\" (UniqueName: \"kubernetes.io/projected/911a5d99-5b74-4633-9d7a-40bee6bb01a4-kube-api-access-5pqms\") pod \"barbican-worker-5b88c58c9-fwsz2\" (UID: \"911a5d99-5b74-4633-9d7a-40bee6bb01a4\") " pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.438396 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-z2dtt"] Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.439990 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.446987 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2zvm\" (UniqueName: \"kubernetes.io/projected/efcafb85-5938-470c-90a7-acfb359882af-kube-api-access-m2zvm\") pod \"barbican-keystone-listener-584fd88cbb-md2tp\" (UID: \"efcafb85-5938-470c-90a7-acfb359882af\") " pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.546507 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5b88c58c9-fwsz2" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.593011 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44svg\" (UniqueName: \"kubernetes.io/projected/0403d039-d577-4378-932a-7908a75858fe-kube-api-access-44svg\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.593079 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.593149 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.593210 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-svc\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.593234 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-config\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.593253 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.595812 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-z2dtt"] Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.612188 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.692169 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-ztgpf" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.697925 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.698005 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-svc\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.698024 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-config\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.698041 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.698086 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44svg\" (UniqueName: \"kubernetes.io/projected/0403d039-d577-4378-932a-7908a75858fe-kube-api-access-44svg\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.698133 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.699791 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.701215 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-svc\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.701892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc 
kubenswrapper[4805]: I0217 00:44:12.706868 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-config\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.728135 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.742565 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7456cd9cc6-8fjxw"] Feb 17 00:44:12 crc kubenswrapper[4805]: E0217 00:44:12.743095 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aacb9ef7-b269-44c2-9b51-62067ea3545b" containerName="heat-db-sync" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.743112 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aacb9ef7-b269-44c2-9b51-62067ea3545b" containerName="heat-db-sync" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.743134 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44svg\" (UniqueName: \"kubernetes.io/projected/0403d039-d577-4378-932a-7908a75858fe-kube-api-access-44svg\") pod \"dnsmasq-dns-85ff748b95-z2dtt\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.743382 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aacb9ef7-b269-44c2-9b51-62067ea3545b" containerName="heat-db-sync" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.744636 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.746831 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.754764 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7456cd9cc6-8fjxw"] Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.799822 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-combined-ca-bundle\") pod \"aacb9ef7-b269-44c2-9b51-62067ea3545b\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.800187 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-config-data\") pod \"aacb9ef7-b269-44c2-9b51-62067ea3545b\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.800272 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzws4\" (UniqueName: \"kubernetes.io/projected/aacb9ef7-b269-44c2-9b51-62067ea3545b-kube-api-access-qzws4\") pod \"aacb9ef7-b269-44c2-9b51-62067ea3545b\" (UID: \"aacb9ef7-b269-44c2-9b51-62067ea3545b\") " Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.800570 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnj2j\" (UniqueName: \"kubernetes.io/projected/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-kube-api-access-nnj2j\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.800616 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-logs\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.800669 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data-custom\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.800719 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.800741 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-combined-ca-bundle\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc 
kubenswrapper[4805]: I0217 00:44:12.819523 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aacb9ef7-b269-44c2-9b51-62067ea3545b-kube-api-access-qzws4" (OuterVolumeSpecName: "kube-api-access-qzws4") pod "aacb9ef7-b269-44c2-9b51-62067ea3545b" (UID: "aacb9ef7-b269-44c2-9b51-62067ea3545b"). InnerVolumeSpecName "kube-api-access-qzws4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.823965 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-ztgpf" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.829980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-ztgpf" event={"ID":"aacb9ef7-b269-44c2-9b51-62067ea3545b","Type":"ContainerDied","Data":"344df2054c35cfb63e910b1365887ec79e685330a77a4ce5306577a4ec525cfc"} Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.830057 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="344df2054c35cfb63e910b1365887ec79e685330a77a4ce5306577a4ec525cfc" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.859444 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aacb9ef7-b269-44c2-9b51-62067ea3545b" (UID: "aacb9ef7-b269-44c2-9b51-62067ea3545b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.903701 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-config-data" (OuterVolumeSpecName: "config-data") pod "aacb9ef7-b269-44c2-9b51-62067ea3545b" (UID: "aacb9ef7-b269-44c2-9b51-62067ea3545b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.906403 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-logs\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.906507 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data-custom\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.906621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.906655 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-combined-ca-bundle\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.906836 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnj2j\" (UniqueName: \"kubernetes.io/projected/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-kube-api-access-nnj2j\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.906907 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.906921 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzws4\" (UniqueName: \"kubernetes.io/projected/aacb9ef7-b269-44c2-9b51-62067ea3545b-kube-api-access-qzws4\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.906935 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aacb9ef7-b269-44c2-9b51-62067ea3545b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.908188 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-logs\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.912590 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" 
Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.914938 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-combined-ca-bundle\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.937748 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data-custom\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.947100 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnj2j\" (UniqueName: \"kubernetes.io/projected/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-kube-api-access-nnj2j\") pod \"barbican-api-7456cd9cc6-8fjxw\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:12 crc kubenswrapper[4805]: I0217 00:44:12.988737 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:13 crc kubenswrapper[4805]: I0217 00:44:13.092010 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:13 crc kubenswrapper[4805]: I0217 00:44:13.217159 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b88c58c9-fwsz2"] Feb 17 00:44:13 crc kubenswrapper[4805]: I0217 00:44:13.333059 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-584fd88cbb-md2tp"] Feb 17 00:44:13 crc kubenswrapper[4805]: I0217 00:44:13.496835 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-z2dtt"] Feb 17 00:44:13 crc kubenswrapper[4805]: I0217 00:44:13.640345 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7456cd9cc6-8fjxw"] Feb 17 00:44:13 crc kubenswrapper[4805]: I0217 00:44:13.841564 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b88c58c9-fwsz2" event={"ID":"911a5d99-5b74-4633-9d7a-40bee6bb01a4","Type":"ContainerStarted","Data":"20f500f7d377ce21eca6171d526b71d77df826075f84f34ebc7ac6a88e0e1a9f"} Feb 17 00:44:13 crc kubenswrapper[4805]: I0217 00:44:13.845177 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" event={"ID":"efcafb85-5938-470c-90a7-acfb359882af","Type":"ContainerStarted","Data":"c010eab49b9339641116d62b7b36997766ba1ad6eed2accf4f39a3c9c62c6600"} Feb 17 00:44:13 crc kubenswrapper[4805]: I0217 00:44:13.846839 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7456cd9cc6-8fjxw" event={"ID":"9c24df5f-e1a2-468b-a86a-cfccf396e5a9","Type":"ContainerStarted","Data":"353fe59021006bf5b7f00d6928a6c83562ba99af62a211d7113d8b17088dfed7"} Feb 17 00:44:13 crc kubenswrapper[4805]: I0217 00:44:13.846887 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7456cd9cc6-8fjxw" event={"ID":"9c24df5f-e1a2-468b-a86a-cfccf396e5a9","Type":"ContainerStarted","Data":"8ec702b697b928e4607c72b1d12b8f19361046ffa1829bf181f1b7667f2a51a8"} Feb 17 00:44:13 crc kubenswrapper[4805]: I0217 
00:44:13.849247 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" event={"ID":"0403d039-d577-4378-932a-7908a75858fe","Type":"ContainerStarted","Data":"29f63ef586bc762b2afe3033b21207dfb1dfaafa66ee01053e5c97853dd1cf1a"} Feb 17 00:44:13 crc kubenswrapper[4805]: I0217 00:44:13.849280 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" event={"ID":"0403d039-d577-4378-932a-7908a75858fe","Type":"ContainerStarted","Data":"6b4031d80b90e6998f6bcbedd0b7ad1dfa91fd0c8a44059fcf973d29e849a423"} Feb 17 00:44:14 crc kubenswrapper[4805]: I0217 00:44:14.873868 4805 generic.go:334] "Generic (PLEG): container finished" podID="0403d039-d577-4378-932a-7908a75858fe" containerID="29f63ef586bc762b2afe3033b21207dfb1dfaafa66ee01053e5c97853dd1cf1a" exitCode=0 Feb 17 00:44:14 crc kubenswrapper[4805]: I0217 00:44:14.874107 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" event={"ID":"0403d039-d577-4378-932a-7908a75858fe","Type":"ContainerDied","Data":"29f63ef586bc762b2afe3033b21207dfb1dfaafa66ee01053e5c97853dd1cf1a"} Feb 17 00:44:14 crc kubenswrapper[4805]: I0217 00:44:14.874130 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" event={"ID":"0403d039-d577-4378-932a-7908a75858fe","Type":"ContainerStarted","Data":"db5ff1bb39086877ae4a92ac1ac7e8d7892e9ce68fc645e91756001722c732d1"} Feb 17 00:44:14 crc kubenswrapper[4805]: I0217 00:44:14.874858 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:14 crc kubenswrapper[4805]: I0217 00:44:14.884112 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7456cd9cc6-8fjxw" event={"ID":"9c24df5f-e1a2-468b-a86a-cfccf396e5a9","Type":"ContainerStarted","Data":"dac8e926e2499467d06ebe904129fcf110049a1f0f32c11de97cfa4942984957"} Feb 17 00:44:14 crc kubenswrapper[4805]: I0217 00:44:14.884348 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:14 crc kubenswrapper[4805]: I0217 00:44:14.884395 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:14 crc kubenswrapper[4805]: I0217 00:44:14.928302 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" podStartSLOduration=2.928270419 podStartE2EDuration="2.928270419s" podCreationTimestamp="2026-02-17 00:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:14.920185652 +0000 UTC m=+1280.935995050" watchObservedRunningTime="2026-02-17 00:44:14.928270419 +0000 UTC m=+1280.944079817" Feb 17 00:44:14 crc kubenswrapper[4805]: I0217 00:44:14.977179 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7456cd9cc6-8fjxw" podStartSLOduration=2.9771589240000003 podStartE2EDuration="2.977158924s" podCreationTimestamp="2026-02-17 00:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:14.961105473 +0000 UTC m=+1280.976914881" watchObservedRunningTime="2026-02-17 00:44:14.977158924 +0000 UTC m=+1280.992968322" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.405097 4805 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/barbican-api-6dc7fccf86-pqgwz"] Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.407068 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.410042 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.410285 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.427372 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dc7fccf86-pqgwz"] Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.490743 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-config-data-custom\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.490816 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-combined-ca-bundle\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.490925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-config-data\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.490956 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-public-tls-certs\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.491013 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-internal-tls-certs\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.491047 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw9sz\" (UniqueName: \"kubernetes.io/projected/c8311529-9b2c-449c-8086-387c3935bbd6-kube-api-access-dw9sz\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.491099 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8311529-9b2c-449c-8086-387c3935bbd6-logs\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: 
\"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.593081 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw9sz\" (UniqueName: \"kubernetes.io/projected/c8311529-9b2c-449c-8086-387c3935bbd6-kube-api-access-dw9sz\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.593402 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8311529-9b2c-449c-8086-387c3935bbd6-logs\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.593467 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-config-data-custom\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.593494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-combined-ca-bundle\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.593561 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-config-data\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.593582 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-public-tls-certs\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.593618 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-internal-tls-certs\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.594941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8311529-9b2c-449c-8086-387c3935bbd6-logs\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.598946 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-public-tls-certs\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " 
pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.599964 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-config-data\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.600290 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-config-data-custom\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.600557 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-internal-tls-certs\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.602181 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8311529-9b2c-449c-8086-387c3935bbd6-combined-ca-bundle\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.609694 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw9sz\" (UniqueName: \"kubernetes.io/projected/c8311529-9b2c-449c-8086-387c3935bbd6-kube-api-access-dw9sz\") pod \"barbican-api-6dc7fccf86-pqgwz\" (UID: \"c8311529-9b2c-449c-8086-387c3935bbd6\") " pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.820614 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.895696 4805 generic.go:334] "Generic (PLEG): container finished" podID="e89462a0-ccda-47cf-93e9-b8cd763c3b08" containerID="f8193068ea49b80a759fcc4f57663e132a889f3763ab6c888e8bcb88ccc7044a" exitCode=0 Feb 17 00:44:15 crc kubenswrapper[4805]: I0217 00:44:15.896622 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-r8kk4" event={"ID":"e89462a0-ccda-47cf-93e9-b8cd763c3b08","Type":"ContainerDied","Data":"f8193068ea49b80a759fcc4f57663e132a889f3763ab6c888e8bcb88ccc7044a"} Feb 17 00:44:16 crc kubenswrapper[4805]: I0217 00:44:16.332454 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dc7fccf86-pqgwz"] Feb 17 00:44:16 crc kubenswrapper[4805]: W0217 00:44:16.334389 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8311529_9b2c_449c_8086_387c3935bbd6.slice/crio-de948fa8c754621d26385ba9826cd09060f8f06498d67e1f0d2c03e25e1930e5 WatchSource:0}: Error finding container de948fa8c754621d26385ba9826cd09060f8f06498d67e1f0d2c03e25e1930e5: Status 404 returned error can't find the container with id de948fa8c754621d26385ba9826cd09060f8f06498d67e1f0d2c03e25e1930e5 Feb 17 00:44:16 crc kubenswrapper[4805]: I0217 00:44:16.909951 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" event={"ID":"efcafb85-5938-470c-90a7-acfb359882af","Type":"ContainerStarted","Data":"dd07df57ef6d38b0a56381ab15b77a4d6e31e4264fa76fd8dbf2bb6837da4bfa"} Feb 17 00:44:16 crc kubenswrapper[4805]: I0217 00:44:16.910282 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" event={"ID":"efcafb85-5938-470c-90a7-acfb359882af","Type":"ContainerStarted","Data":"dbacc4e5cc57ca8b6862a9b6af5466d3ce59dc40f34a3e2d6c70f999eb1380af"} Feb 17 00:44:16 crc kubenswrapper[4805]: I0217 00:44:16.915353 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dc7fccf86-pqgwz" event={"ID":"c8311529-9b2c-449c-8086-387c3935bbd6","Type":"ContainerStarted","Data":"22f57a991f65a8445163422790ec7e33820db80a0c89e8646e7ca0bc0de9c362"} Feb 17 00:44:16 crc kubenswrapper[4805]: I0217 00:44:16.915403 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dc7fccf86-pqgwz" event={"ID":"c8311529-9b2c-449c-8086-387c3935bbd6","Type":"ContainerStarted","Data":"aadeaabee1ec11070f28bccf4a45f9f5b4234f4a3528cf6f76a986f957fd61d0"} Feb 17 00:44:16 crc kubenswrapper[4805]: I0217 00:44:16.915413 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dc7fccf86-pqgwz" event={"ID":"c8311529-9b2c-449c-8086-387c3935bbd6","Type":"ContainerStarted","Data":"de948fa8c754621d26385ba9826cd09060f8f06498d67e1f0d2c03e25e1930e5"} Feb 17 00:44:16 crc kubenswrapper[4805]: I0217 00:44:16.915492 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:16 crc kubenswrapper[4805]: I0217 00:44:16.922290 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b88c58c9-fwsz2" event={"ID":"911a5d99-5b74-4633-9d7a-40bee6bb01a4","Type":"ContainerStarted","Data":"7531a934010e70a21ea67e4a3cbd7d8c56c02ee648b76d7abece15e2bf4a57d1"} Feb 17 00:44:16 crc kubenswrapper[4805]: I0217 00:44:16.922349 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-worker-5b88c58c9-fwsz2" event={"ID":"911a5d99-5b74-4633-9d7a-40bee6bb01a4","Type":"ContainerStarted","Data":"a734bb4c59071b6e00117b6697c2c09402fc4cf6d82ef5080916aab725cc782e"} Feb 17 00:44:16 crc kubenswrapper[4805]: I0217 00:44:16.970865 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-584fd88cbb-md2tp" podStartSLOduration=2.977079521 podStartE2EDuration="4.970844769s" podCreationTimestamp="2026-02-17 00:44:12 +0000 UTC" firstStartedPulling="2026-02-17 00:44:13.332924096 +0000 UTC m=+1279.348733494" lastFinishedPulling="2026-02-17 00:44:15.326689354 +0000 UTC m=+1281.342498742" observedRunningTime="2026-02-17 00:44:16.92925848 +0000 UTC m=+1282.945067878" watchObservedRunningTime="2026-02-17 00:44:16.970844769 +0000 UTC m=+1282.986654167" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.027880 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6dc7fccf86-pqgwz" podStartSLOduration=2.027859003 podStartE2EDuration="2.027859003s" podCreationTimestamp="2026-02-17 00:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:16.968769651 +0000 UTC m=+1282.984579049" watchObservedRunningTime="2026-02-17 00:44:17.027859003 +0000 UTC m=+1283.043668391" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.053588 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5b88c58c9-fwsz2" podStartSLOduration=2.945267417 podStartE2EDuration="5.053568766s" podCreationTimestamp="2026-02-17 00:44:12 +0000 UTC" firstStartedPulling="2026-02-17 00:44:13.225293929 +0000 UTC m=+1279.241103327" lastFinishedPulling="2026-02-17 00:44:15.333595278 +0000 UTC m=+1281.349404676" observedRunningTime="2026-02-17 00:44:17.009170617 +0000 UTC m=+1283.024980015" watchObservedRunningTime="2026-02-17 00:44:17.053568766 +0000 UTC m=+1283.069378174" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.646244 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.758300 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e89462a0-ccda-47cf-93e9-b8cd763c3b08-etc-machine-id\") pod \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.758410 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e89462a0-ccda-47cf-93e9-b8cd763c3b08-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e89462a0-ccda-47cf-93e9-b8cd763c3b08" (UID: "e89462a0-ccda-47cf-93e9-b8cd763c3b08"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.758456 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppmgq\" (UniqueName: \"kubernetes.io/projected/e89462a0-ccda-47cf-93e9-b8cd763c3b08-kube-api-access-ppmgq\") pod \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.758535 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-config-data\") pod \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.758587 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-combined-ca-bundle\") pod \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.758669 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-scripts\") pod \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.758853 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-db-sync-config-data\") pod \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\" (UID: \"e89462a0-ccda-47cf-93e9-b8cd763c3b08\") " Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.760618 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e89462a0-ccda-47cf-93e9-b8cd763c3b08-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.764697 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e89462a0-ccda-47cf-93e9-b8cd763c3b08" (UID: "e89462a0-ccda-47cf-93e9-b8cd763c3b08"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.766211 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89462a0-ccda-47cf-93e9-b8cd763c3b08-kube-api-access-ppmgq" (OuterVolumeSpecName: "kube-api-access-ppmgq") pod "e89462a0-ccda-47cf-93e9-b8cd763c3b08" (UID: "e89462a0-ccda-47cf-93e9-b8cd763c3b08"). InnerVolumeSpecName "kube-api-access-ppmgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.794459 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-scripts" (OuterVolumeSpecName: "scripts") pod "e89462a0-ccda-47cf-93e9-b8cd763c3b08" (UID: "e89462a0-ccda-47cf-93e9-b8cd763c3b08"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.796342 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e89462a0-ccda-47cf-93e9-b8cd763c3b08" (UID: "e89462a0-ccda-47cf-93e9-b8cd763c3b08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.824403 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-config-data" (OuterVolumeSpecName: "config-data") pod "e89462a0-ccda-47cf-93e9-b8cd763c3b08" (UID: "e89462a0-ccda-47cf-93e9-b8cd763c3b08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.862517 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppmgq\" (UniqueName: \"kubernetes.io/projected/e89462a0-ccda-47cf-93e9-b8cd763c3b08-kube-api-access-ppmgq\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.862542 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.862553 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.862562 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.862571 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e89462a0-ccda-47cf-93e9-b8cd763c3b08-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.945980 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-r8kk4" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.946063 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-r8kk4" event={"ID":"e89462a0-ccda-47cf-93e9-b8cd763c3b08","Type":"ContainerDied","Data":"c2ed1c4eb1678d2e73799ee550e7ccff1e1ba08a11f16fbe791b7a28ca78138c"} Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.946085 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ed1c4eb1678d2e73799ee550e7ccff1e1ba08a11f16fbe791b7a28ca78138c" Feb 17 00:44:17 crc kubenswrapper[4805]: I0217 00:44:17.946508 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.142750 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 00:44:18 crc kubenswrapper[4805]: E0217 00:44:18.143554 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89462a0-ccda-47cf-93e9-b8cd763c3b08" containerName="cinder-db-sync" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.143572 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89462a0-ccda-47cf-93e9-b8cd763c3b08" containerName="cinder-db-sync" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.143885 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89462a0-ccda-47cf-93e9-b8cd763c3b08" containerName="cinder-db-sync" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.145070 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.146506 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4sfjx" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.148012 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.148227 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.148533 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.173346 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-scripts\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.173395 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b44da2ac-823e-47f5-83dd-5fd0fc93f874-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.173414 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqphb\" (UniqueName: \"kubernetes.io/projected/b44da2ac-823e-47f5-83dd-5fd0fc93f874-kube-api-access-rqphb\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: 
I0217 00:44:18.173485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.173555 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.173628 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.233693 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.254922 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-z2dtt"] Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.255192 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" podUID="0403d039-d577-4378-932a-7908a75858fe" containerName="dnsmasq-dns" containerID="cri-o://db5ff1bb39086877ae4a92ac1ac7e8d7892e9ce68fc645e91756001722c732d1" gracePeriod=10 Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.276150 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.276246 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.276304 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-scripts\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.276334 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b44da2ac-823e-47f5-83dd-5fd0fc93f874-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.276350 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqphb\" (UniqueName: \"kubernetes.io/projected/b44da2ac-823e-47f5-83dd-5fd0fc93f874-kube-api-access-rqphb\") pod 
\"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.276387 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.281106 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b44da2ac-823e-47f5-83dd-5fd0fc93f874-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.281653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.284049 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.291958 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.292696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-scripts\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.309988 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqphb\" (UniqueName: \"kubernetes.io/projected/b44da2ac-823e-47f5-83dd-5fd0fc93f874-kube-api-access-rqphb\") pod \"cinder-scheduler-0\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.324077 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5jpc8"] Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.325728 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.341165 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5jpc8"] Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.384967 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.385034 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.385059 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.385148 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkl6t\" (UniqueName: \"kubernetes.io/projected/ab376c9f-5da0-4d6f-aca4-16c20967016d-kube-api-access-vkl6t\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.385170 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-config\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.385209 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.454353 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.455970 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.461685 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.468629 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.485300 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486138 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8h8v\" (UniqueName: \"kubernetes.io/projected/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-kube-api-access-x8h8v\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486178 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkl6t\" (UniqueName: \"kubernetes.io/projected/ab376c9f-5da0-4d6f-aca4-16c20967016d-kube-api-access-vkl6t\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486200 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-config\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486239 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486309 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486338 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486366 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486387 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc 
kubenswrapper[4805]: I0217 00:44:18.486405 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-scripts\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486433 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-logs\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486449 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.486472 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.487747 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.487747 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-config\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.487974 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.488448 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.491388 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.510431 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkl6t\" (UniqueName: 
\"kubernetes.io/projected/ab376c9f-5da0-4d6f-aca4-16c20967016d-kube-api-access-vkl6t\") pod \"dnsmasq-dns-5c9776ccc5-5jpc8\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.588502 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.588841 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.588905 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-scripts\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.588949 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-logs\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.588988 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.589020 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.589087 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8h8v\" (UniqueName: \"kubernetes.io/projected/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-kube-api-access-x8h8v\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.589234 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.589842 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-logs\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.593076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-scripts\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.593756 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.594722 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.594842 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.611418 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8h8v\" (UniqueName: \"kubernetes.io/projected/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-kube-api-access-x8h8v\") pod \"cinder-api-0\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.664167 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.673302 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.796180 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.933691 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-699c4cfd75-pjgkq"] Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.933947 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-699c4cfd75-pjgkq" podUID="47d4f059-d277-419c-8a13-ed2a1a89a73c" containerName="neutron-api" containerID="cri-o://17a416a85a870e6a61efb6f2fc8cb11aa366cb308ea76d674d24abc230271a1b" gracePeriod=30 Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.936871 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-699c4cfd75-pjgkq" podUID="47d4f059-d277-419c-8a13-ed2a1a89a73c" containerName="neutron-httpd" containerID="cri-o://9ffc5e90c21136ba6170e6476c4bcbdd636aed2614287db1aea84ae2e77dcb9b" gracePeriod=30 Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.955481 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.971298 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7fd8fd677-jrz8c"] Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.973370 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.974006 4805 generic.go:334] "Generic (PLEG): container finished" podID="0403d039-d577-4378-932a-7908a75858fe" containerID="db5ff1bb39086877ae4a92ac1ac7e8d7892e9ce68fc645e91756001722c732d1" exitCode=0 Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.974935 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" event={"ID":"0403d039-d577-4378-932a-7908a75858fe","Type":"ContainerDied","Data":"db5ff1bb39086877ae4a92ac1ac7e8d7892e9ce68fc645e91756001722c732d1"} Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.984782 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fd8fd677-jrz8c"] Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.996160 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-public-tls-certs\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.996218 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-internal-tls-certs\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.996473 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-httpd-config\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.996709 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-config\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.996789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvlht\" (UniqueName: \"kubernetes.io/projected/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-kube-api-access-lvlht\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.996824 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-combined-ca-bundle\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:18 crc kubenswrapper[4805]: I0217 00:44:18.996878 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-ovndb-tls-certs\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " 
pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.099312 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-config\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.099397 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvlht\" (UniqueName: \"kubernetes.io/projected/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-kube-api-access-lvlht\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.099431 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-combined-ca-bundle\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.099464 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-ovndb-tls-certs\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.099523 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-public-tls-certs\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.099582 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-internal-tls-certs\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.099677 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-httpd-config\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.107075 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-config\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.108070 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-combined-ca-bundle\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.109971 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-httpd-config\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.114141 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-public-tls-certs\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.120089 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-ovndb-tls-certs\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.141023 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-internal-tls-certs\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.165391 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvlht\" (UniqueName: \"kubernetes.io/projected/c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac-kube-api-access-lvlht\") pod \"neutron-7fd8fd677-jrz8c\" (UID: \"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac\") " pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.295959 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.990747 4805 generic.go:334] "Generic (PLEG): container finished" podID="47d4f059-d277-419c-8a13-ed2a1a89a73c" containerID="9ffc5e90c21136ba6170e6476c4bcbdd636aed2614287db1aea84ae2e77dcb9b" exitCode=0 Feb 17 00:44:19 crc kubenswrapper[4805]: I0217 00:44:19.990992 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699c4cfd75-pjgkq" event={"ID":"47d4f059-d277-419c-8a13-ed2a1a89a73c","Type":"ContainerDied","Data":"9ffc5e90c21136ba6170e6476c4bcbdd636aed2614287db1aea84ae2e77dcb9b"} Feb 17 00:44:20 crc kubenswrapper[4805]: I0217 00:44:20.494377 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:20 crc kubenswrapper[4805]: I0217 00:44:20.929092 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 00:44:21 crc kubenswrapper[4805]: I0217 00:44:21.937751 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:23 crc kubenswrapper[4805]: I0217 00:44:23.078856 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:44:23 crc kubenswrapper[4805]: I0217 00:44:23.078921 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:44:23 crc kubenswrapper[4805]: I0217 00:44:23.078970 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:44:23 crc kubenswrapper[4805]: I0217 00:44:23.079774 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2ac2cae8d5d1427fe9596d0b76a1c102de0e2b3a3a542a90b4c3a31f375825b"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:44:23 crc kubenswrapper[4805]: I0217 00:44:23.079834 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://e2ac2cae8d5d1427fe9596d0b76a1c102de0e2b3a3a542a90b4c3a31f375825b" gracePeriod=600 Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.033058 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="e2ac2cae8d5d1427fe9596d0b76a1c102de0e2b3a3a542a90b4c3a31f375825b" exitCode=0 Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.033123 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"e2ac2cae8d5d1427fe9596d0b76a1c102de0e2b3a3a542a90b4c3a31f375825b"} Feb 17 00:44:24 crc 
kubenswrapper[4805]: I0217 00:44:24.033394 4805 scope.go:117] "RemoveContainer" containerID="9b39148eed4bf6c031ce94a8f02e78b29f27257693ebbfc8744d515a52505620" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.372562 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-699c4cfd75-pjgkq" podUID="47d4f059-d277-419c-8a13-ed2a1a89a73c" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.180:9696/\": dial tcp 10.217.0.180:9696: connect: connection refused" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.583820 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.616735 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-svc\") pod \"0403d039-d577-4378-932a-7908a75858fe\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.616797 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-swift-storage-0\") pod \"0403d039-d577-4378-932a-7908a75858fe\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.616876 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-sb\") pod \"0403d039-d577-4378-932a-7908a75858fe\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.617006 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44svg\" (UniqueName: \"kubernetes.io/projected/0403d039-d577-4378-932a-7908a75858fe-kube-api-access-44svg\") pod \"0403d039-d577-4378-932a-7908a75858fe\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.617073 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-config\") pod \"0403d039-d577-4378-932a-7908a75858fe\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.617107 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-nb\") pod \"0403d039-d577-4378-932a-7908a75858fe\" (UID: \"0403d039-d577-4378-932a-7908a75858fe\") " Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.632307 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0403d039-d577-4378-932a-7908a75858fe-kube-api-access-44svg" (OuterVolumeSpecName: "kube-api-access-44svg") pod "0403d039-d577-4378-932a-7908a75858fe" (UID: "0403d039-d577-4378-932a-7908a75858fe"). InnerVolumeSpecName "kube-api-access-44svg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.684413 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-config" (OuterVolumeSpecName: "config") pod "0403d039-d577-4378-932a-7908a75858fe" (UID: "0403d039-d577-4378-932a-7908a75858fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.689375 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0403d039-d577-4378-932a-7908a75858fe" (UID: "0403d039-d577-4378-932a-7908a75858fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.699486 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0403d039-d577-4378-932a-7908a75858fe" (UID: "0403d039-d577-4378-932a-7908a75858fe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.705698 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0403d039-d577-4378-932a-7908a75858fe" (UID: "0403d039-d577-4378-932a-7908a75858fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.720436 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.720464 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.720474 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.720484 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.720493 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44svg\" (UniqueName: \"kubernetes.io/projected/0403d039-d577-4378-932a-7908a75858fe-kube-api-access-44svg\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.721282 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0403d039-d577-4378-932a-7908a75858fe" (UID: "0403d039-d577-4378-932a-7908a75858fe"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:24 crc kubenswrapper[4805]: I0217 00:44:24.821686 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0403d039-d577-4378-932a-7908a75858fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:25 crc kubenswrapper[4805]: I0217 00:44:25.046194 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" event={"ID":"0403d039-d577-4378-932a-7908a75858fe","Type":"ContainerDied","Data":"6b4031d80b90e6998f6bcbedd0b7ad1dfa91fd0c8a44059fcf973d29e849a423"} Feb 17 00:44:25 crc kubenswrapper[4805]: I0217 00:44:25.046232 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" Feb 17 00:44:25 crc kubenswrapper[4805]: I0217 00:44:25.081456 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-z2dtt"] Feb 17 00:44:25 crc kubenswrapper[4805]: I0217 00:44:25.094308 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-z2dtt"] Feb 17 00:44:25 crc kubenswrapper[4805]: I0217 00:44:25.313495 4805 scope.go:117] "RemoveContainer" containerID="db5ff1bb39086877ae4a92ac1ac7e8d7892e9ce68fc645e91756001722c732d1" Feb 17 00:44:25 crc kubenswrapper[4805]: I0217 00:44:25.374728 4805 scope.go:117] "RemoveContainer" containerID="29f63ef586bc762b2afe3033b21207dfb1dfaafa66ee01053e5c97853dd1cf1a" Feb 17 00:44:25 crc kubenswrapper[4805]: E0217 00:44:25.738287 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" Feb 17 00:44:25 crc kubenswrapper[4805]: I0217 00:44:25.914441 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5jpc8"] Feb 17 00:44:25 crc kubenswrapper[4805]: I0217 00:44:25.961135 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 00:44:25 crc kubenswrapper[4805]: I0217 00:44:25.975155 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fd8fd677-jrz8c"] Feb 17 00:44:26 crc kubenswrapper[4805]: I0217 00:44:26.070893 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" event={"ID":"ab376c9f-5da0-4d6f-aca4-16c20967016d","Type":"ContainerStarted","Data":"a50ffc0bfe61f3513b27f10593b704839452679e0f6c240b7aa19807c277a760"} Feb 17 00:44:26 crc kubenswrapper[4805]: I0217 00:44:26.071753 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd8fd677-jrz8c" event={"ID":"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac","Type":"ContainerStarted","Data":"fa89194b5dfa2a13c8a5d526e3d8aa8f2d18227b2a13319bc15bdfc05aeb05fe"} Feb 17 00:44:26 crc kubenswrapper[4805]: I0217 00:44:26.073126 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1","Type":"ContainerStarted","Data":"06eb10dde969417885e5834da33c50879f75bf6f5efb18037bcce2f19c6eb570"} Feb 17 00:44:26 crc kubenswrapper[4805]: I0217 00:44:26.077699 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 00:44:26 crc kubenswrapper[4805]: I0217 00:44:26.089379 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b"} Feb 17 00:44:26 crc kubenswrapper[4805]: I0217 00:44:26.098380 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="ceilometer-notification-agent" containerID="cri-o://b0f8890a90cf6fcb2ec4f0157f3bf038f6a5344d03eb2432da5b86681671390b" gracePeriod=30 Feb 17 00:44:26 crc kubenswrapper[4805]: I0217 00:44:26.098657 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab1916fe-f237-4dd1-8af5-f18a52248311","Type":"ContainerStarted","Data":"55942b46a4627740a3859d0545e05dbc2723a7ee82f9d7ea3b609857471d69ea"} Feb 17 00:44:26 crc kubenswrapper[4805]: I0217 00:44:26.098695 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 00:44:26 crc kubenswrapper[4805]: I0217 00:44:26.098713 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="sg-core" containerID="cri-o://6eaf75057a0c587e7342058e5dac139ac53cf1defd7285a151e4ff2f5eb666c5" gracePeriod=30 Feb 17 00:44:26 crc kubenswrapper[4805]: I0217 00:44:26.098719 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="proxy-httpd" containerID="cri-o://55942b46a4627740a3859d0545e05dbc2723a7ee82f9d7ea3b609857471d69ea" gracePeriod=30 Feb 17 00:44:26 crc kubenswrapper[4805]: E0217 00:44:26.643260 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab1916fe_f237_4dd1_8af5_f18a52248311.slice/crio-conmon-6eaf75057a0c587e7342058e5dac139ac53cf1defd7285a151e4ff2f5eb666c5.scope\": RecentStats: unable to find data in memory cache]" Feb 17 00:44:26 crc kubenswrapper[4805]: I0217 00:44:26.797991 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0403d039-d577-4378-932a-7908a75858fe" path="/var/lib/kubelet/pods/0403d039-d577-4378-932a-7908a75858fe/volumes" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.169132 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerID="55942b46a4627740a3859d0545e05dbc2723a7ee82f9d7ea3b609857471d69ea" exitCode=0 Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.169701 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerID="6eaf75057a0c587e7342058e5dac139ac53cf1defd7285a151e4ff2f5eb666c5" exitCode=2 Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.169713 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerID="b0f8890a90cf6fcb2ec4f0157f3bf038f6a5344d03eb2432da5b86681671390b" exitCode=0 Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.169776 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab1916fe-f237-4dd1-8af5-f18a52248311","Type":"ContainerDied","Data":"55942b46a4627740a3859d0545e05dbc2723a7ee82f9d7ea3b609857471d69ea"} Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.169801 4805 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab1916fe-f237-4dd1-8af5-f18a52248311","Type":"ContainerDied","Data":"6eaf75057a0c587e7342058e5dac139ac53cf1defd7285a151e4ff2f5eb666c5"} Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.169812 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab1916fe-f237-4dd1-8af5-f18a52248311","Type":"ContainerDied","Data":"b0f8890a90cf6fcb2ec4f0157f3bf038f6a5344d03eb2432da5b86681671390b"} Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.179711 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab376c9f-5da0-4d6f-aca4-16c20967016d" containerID="a6e39bd1f3788c3e4a4e87c50da9b9e609ba347ace019b6bf0d5cfc5c0632ecc" exitCode=0 Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.180384 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" event={"ID":"ab376c9f-5da0-4d6f-aca4-16c20967016d","Type":"ContainerDied","Data":"a6e39bd1f3788c3e4a4e87c50da9b9e609ba347ace019b6bf0d5cfc5c0632ecc"} Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.186680 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd8fd677-jrz8c" event={"ID":"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac","Type":"ContainerStarted","Data":"4e0e951cab1136e429df035cfa4a887fe68fda47e7605b63d990c4f56b3d1a60"} Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.187276 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.198284 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1","Type":"ContainerStarted","Data":"359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba"} Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.203360 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b44da2ac-823e-47f5-83dd-5fd0fc93f874","Type":"ContainerStarted","Data":"5d389e380d3cdf667e72a4f747f9c3bcbc5be83f9ab3f65b5ccd21d87cce0cfb"} Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.241796 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7fd8fd677-jrz8c" podStartSLOduration=9.241779952 podStartE2EDuration="9.241779952s" podCreationTimestamp="2026-02-17 00:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:27.216825331 +0000 UTC m=+1293.232634729" watchObservedRunningTime="2026-02-17 00:44:27.241779952 +0000 UTC m=+1293.257589350" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.245856 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.385721 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-combined-ca-bundle\") pod \"ab1916fe-f237-4dd1-8af5-f18a52248311\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.385769 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-config-data\") pod \"ab1916fe-f237-4dd1-8af5-f18a52248311\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.385802 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-log-httpd\") pod \"ab1916fe-f237-4dd1-8af5-f18a52248311\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.385824 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-scripts\") pod \"ab1916fe-f237-4dd1-8af5-f18a52248311\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.385898 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lckp4\" (UniqueName: \"kubernetes.io/projected/ab1916fe-f237-4dd1-8af5-f18a52248311-kube-api-access-lckp4\") pod \"ab1916fe-f237-4dd1-8af5-f18a52248311\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.385916 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-sg-core-conf-yaml\") pod \"ab1916fe-f237-4dd1-8af5-f18a52248311\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.385933 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-run-httpd\") pod \"ab1916fe-f237-4dd1-8af5-f18a52248311\" (UID: \"ab1916fe-f237-4dd1-8af5-f18a52248311\") " Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.388568 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ab1916fe-f237-4dd1-8af5-f18a52248311" (UID: "ab1916fe-f237-4dd1-8af5-f18a52248311"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.392084 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1916fe-f237-4dd1-8af5-f18a52248311-kube-api-access-lckp4" (OuterVolumeSpecName: "kube-api-access-lckp4") pod "ab1916fe-f237-4dd1-8af5-f18a52248311" (UID: "ab1916fe-f237-4dd1-8af5-f18a52248311"). InnerVolumeSpecName "kube-api-access-lckp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.392468 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ab1916fe-f237-4dd1-8af5-f18a52248311" (UID: "ab1916fe-f237-4dd1-8af5-f18a52248311"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.402656 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-scripts" (OuterVolumeSpecName: "scripts") pod "ab1916fe-f237-4dd1-8af5-f18a52248311" (UID: "ab1916fe-f237-4dd1-8af5-f18a52248311"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.429815 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ab1916fe-f237-4dd1-8af5-f18a52248311" (UID: "ab1916fe-f237-4dd1-8af5-f18a52248311"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.487632 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.487943 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.487952 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lckp4\" (UniqueName: \"kubernetes.io/projected/ab1916fe-f237-4dd1-8af5-f18a52248311-kube-api-access-lckp4\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.487963 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.487971 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab1916fe-f237-4dd1-8af5-f18a52248311-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.550699 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab1916fe-f237-4dd1-8af5-f18a52248311" (UID: "ab1916fe-f237-4dd1-8af5-f18a52248311"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.580598 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-config-data" (OuterVolumeSpecName: "config-data") pod "ab1916fe-f237-4dd1-8af5-f18a52248311" (UID: "ab1916fe-f237-4dd1-8af5-f18a52248311"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.589430 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.589467 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab1916fe-f237-4dd1-8af5-f18a52248311-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.607638 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.619884 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dc7fccf86-pqgwz" Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.721628 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7456cd9cc6-8fjxw"] Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.721883 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7456cd9cc6-8fjxw" podUID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerName="barbican-api-log" containerID="cri-o://353fe59021006bf5b7f00d6928a6c83562ba99af62a211d7113d8b17088dfed7" gracePeriod=30 Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.722021 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7456cd9cc6-8fjxw" podUID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerName="barbican-api" containerID="cri-o://dac8e926e2499467d06ebe904129fcf110049a1f0f32c11de97cfa4942984957" gracePeriod=30 Feb 17 00:44:27 crc kubenswrapper[4805]: I0217 00:44:27.991031 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ff748b95-z2dtt" podUID="0403d039-d577-4378-932a-7908a75858fe" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: i/o timeout" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.229059 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1","Type":"ContainerStarted","Data":"ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4"} Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.229490 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.229387 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" containerName="cinder-api-log" containerID="cri-o://359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba" gracePeriod=30 Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.229189 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" containerName="cinder-api" containerID="cri-o://ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4" gracePeriod=30 Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.232526 4805 generic.go:334] "Generic (PLEG): container finished" podID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerID="353fe59021006bf5b7f00d6928a6c83562ba99af62a211d7113d8b17088dfed7" 
exitCode=143 Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.232592 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7456cd9cc6-8fjxw" event={"ID":"9c24df5f-e1a2-468b-a86a-cfccf396e5a9","Type":"ContainerDied","Data":"353fe59021006bf5b7f00d6928a6c83562ba99af62a211d7113d8b17088dfed7"} Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.234923 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b44da2ac-823e-47f5-83dd-5fd0fc93f874","Type":"ContainerStarted","Data":"46aa34541a96b835e06a82746e3ba39e20bf7a862c025391cc45337700c3cb07"} Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.251842 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.251824637 podStartE2EDuration="10.251824637s" podCreationTimestamp="2026-02-17 00:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:28.24766887 +0000 UTC m=+1294.263478268" watchObservedRunningTime="2026-02-17 00:44:28.251824637 +0000 UTC m=+1294.267634035" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.253728 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab1916fe-f237-4dd1-8af5-f18a52248311","Type":"ContainerDied","Data":"6b6f194c5248c5e8d48b368898c279216a2b050b1c4bb69ba6c2b656ee842960"} Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.253777 4805 scope.go:117] "RemoveContainer" containerID="55942b46a4627740a3859d0545e05dbc2723a7ee82f9d7ea3b609857471d69ea" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.253967 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.288927 4805 generic.go:334] "Generic (PLEG): container finished" podID="47d4f059-d277-419c-8a13-ed2a1a89a73c" containerID="17a416a85a870e6a61efb6f2fc8cb11aa366cb308ea76d674d24abc230271a1b" exitCode=0 Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.289053 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699c4cfd75-pjgkq" event={"ID":"47d4f059-d277-419c-8a13-ed2a1a89a73c","Type":"ContainerDied","Data":"17a416a85a870e6a61efb6f2fc8cb11aa366cb308ea76d674d24abc230271a1b"} Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.315554 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" event={"ID":"ab376c9f-5da0-4d6f-aca4-16c20967016d","Type":"ContainerStarted","Data":"494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d"} Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.316715 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.325774 4805 scope.go:117] "RemoveContainer" containerID="6eaf75057a0c587e7342058e5dac139ac53cf1defd7285a151e4ff2f5eb666c5" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.353150 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd8fd677-jrz8c" event={"ID":"c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac","Type":"ContainerStarted","Data":"c6918539bc66da988fe43771c8b81e3dfda9f1e7a9c981c66b8cdeebd18428d2"} Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.363009 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.380963 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.406921 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:28 crc kubenswrapper[4805]: E0217 00:44:28.407362 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="ceilometer-notification-agent" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.407374 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="ceilometer-notification-agent" Feb 17 00:44:28 crc kubenswrapper[4805]: E0217 00:44:28.407396 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0403d039-d577-4378-932a-7908a75858fe" containerName="dnsmasq-dns" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.407402 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0403d039-d577-4378-932a-7908a75858fe" containerName="dnsmasq-dns" Feb 17 00:44:28 crc kubenswrapper[4805]: E0217 00:44:28.407413 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="sg-core" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.407419 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="sg-core" Feb 17 00:44:28 crc kubenswrapper[4805]: E0217 00:44:28.407442 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="proxy-httpd" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.407447 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="proxy-httpd" Feb 17 00:44:28 crc kubenswrapper[4805]: E0217 00:44:28.407465 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0403d039-d577-4378-932a-7908a75858fe" containerName="init" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.407471 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0403d039-d577-4378-932a-7908a75858fe" containerName="init" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.407654 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0403d039-d577-4378-932a-7908a75858fe" containerName="dnsmasq-dns" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.407666 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="proxy-httpd" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.407687 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="ceilometer-notification-agent" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.407698 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" containerName="sg-core" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.409530 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.415717 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.415764 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.421689 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" podStartSLOduration=10.421667903 podStartE2EDuration="10.421667903s" podCreationTimestamp="2026-02-17 00:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:28.379873418 +0000 UTC m=+1294.395682816" watchObservedRunningTime="2026-02-17 00:44:28.421667903 +0000 UTC m=+1294.437477301" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.456230 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.466044 4805 scope.go:117] "RemoveContainer" containerID="b0f8890a90cf6fcb2ec4f0157f3bf038f6a5344d03eb2432da5b86681671390b" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.510917 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.510988 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-scripts\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.511056 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mhmw8\" (UniqueName: \"kubernetes.io/projected/d7a4ad03-df41-44c9-8bcf-e93f380484ad-kube-api-access-mhmw8\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.511103 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-log-httpd\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.511121 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.511159 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-run-httpd\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.511242 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-config-data\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.613405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-log-httpd\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.613780 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.613925 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-run-httpd\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.613999 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-config-data\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.614037 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-log-httpd\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.614051 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.614173 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-scripts\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.614289 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhmw8\" (UniqueName: \"kubernetes.io/projected/d7a4ad03-df41-44c9-8bcf-e93f380484ad-kube-api-access-mhmw8\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.615486 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-run-httpd\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.620151 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-scripts\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.622454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.627517 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-config-data\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.632884 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhmw8\" (UniqueName: \"kubernetes.io/projected/d7a4ad03-df41-44c9-8bcf-e93f380484ad-kube-api-access-mhmw8\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.639045 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.733436 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.799682 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab1916fe-f237-4dd1-8af5-f18a52248311" path="/var/lib/kubelet/pods/ab1916fe-f237-4dd1-8af5-f18a52248311/volumes" Feb 17 00:44:28 crc kubenswrapper[4805]: I0217 00:44:28.895279 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.024191 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-ovndb-tls-certs\") pod \"47d4f059-d277-419c-8a13-ed2a1a89a73c\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.024292 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-internal-tls-certs\") pod \"47d4f059-d277-419c-8a13-ed2a1a89a73c\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.024359 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-combined-ca-bundle\") pod \"47d4f059-d277-419c-8a13-ed2a1a89a73c\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.024393 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-public-tls-certs\") pod \"47d4f059-d277-419c-8a13-ed2a1a89a73c\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.024525 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqkdt\" (UniqueName: \"kubernetes.io/projected/47d4f059-d277-419c-8a13-ed2a1a89a73c-kube-api-access-gqkdt\") pod \"47d4f059-d277-419c-8a13-ed2a1a89a73c\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.024617 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-config\") pod \"47d4f059-d277-419c-8a13-ed2a1a89a73c\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.024737 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-httpd-config\") pod \"47d4f059-d277-419c-8a13-ed2a1a89a73c\" (UID: \"47d4f059-d277-419c-8a13-ed2a1a89a73c\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.034459 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "47d4f059-d277-419c-8a13-ed2a1a89a73c" (UID: "47d4f059-d277-419c-8a13-ed2a1a89a73c"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.035138 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47d4f059-d277-419c-8a13-ed2a1a89a73c-kube-api-access-gqkdt" (OuterVolumeSpecName: "kube-api-access-gqkdt") pod "47d4f059-d277-419c-8a13-ed2a1a89a73c" (UID: "47d4f059-d277-419c-8a13-ed2a1a89a73c"). InnerVolumeSpecName "kube-api-access-gqkdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.112359 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "47d4f059-d277-419c-8a13-ed2a1a89a73c" (UID: "47d4f059-d277-419c-8a13-ed2a1a89a73c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.127985 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.128015 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqkdt\" (UniqueName: \"kubernetes.io/projected/47d4f059-d277-419c-8a13-ed2a1a89a73c-kube-api-access-gqkdt\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.128026 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.191039 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-config" (OuterVolumeSpecName: "config") pod "47d4f059-d277-419c-8a13-ed2a1a89a73c" (UID: "47d4f059-d277-419c-8a13-ed2a1a89a73c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.193627 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47d4f059-d277-419c-8a13-ed2a1a89a73c" (UID: "47d4f059-d277-419c-8a13-ed2a1a89a73c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.211495 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "47d4f059-d277-419c-8a13-ed2a1a89a73c" (UID: "47d4f059-d277-419c-8a13-ed2a1a89a73c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.221698 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.231408 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.231442 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.231453 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.242446 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "47d4f059-d277-419c-8a13-ed2a1a89a73c" (UID: "47d4f059-d277-419c-8a13-ed2a1a89a73c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.324424 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.333077 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data-custom\") pod \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.333177 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-combined-ca-bundle\") pod \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.333194 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data\") pod \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.333271 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8h8v\" (UniqueName: \"kubernetes.io/projected/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-kube-api-access-x8h8v\") pod \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.333341 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-etc-machine-id\") pod \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.333383 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-scripts\") pod \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " 
Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.333420 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-logs\") pod \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\" (UID: \"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1\") " Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.333901 4805 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d4f059-d277-419c-8a13-ed2a1a89a73c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.334202 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-logs" (OuterVolumeSpecName: "logs") pod "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" (UID: "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.336429 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" (UID: "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.339357 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-scripts" (OuterVolumeSpecName: "scripts") pod "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" (UID: "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.340536 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-kube-api-access-x8h8v" (OuterVolumeSpecName: "kube-api-access-x8h8v") pod "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" (UID: "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1"). InnerVolumeSpecName "kube-api-access-x8h8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.340815 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" (UID: "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.367014 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699c4cfd75-pjgkq" event={"ID":"47d4f059-d277-419c-8a13-ed2a1a89a73c","Type":"ContainerDied","Data":"cef012a50ce52c88cb178f6dc3d87d0cccada6811336ee178a02223213badd1e"} Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.367078 4805 scope.go:117] "RemoveContainer" containerID="9ffc5e90c21136ba6170e6476c4bcbdd636aed2614287db1aea84ae2e77dcb9b" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.367178 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-699c4cfd75-pjgkq" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.372800 4805 generic.go:334] "Generic (PLEG): container finished" podID="8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" containerID="ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4" exitCode=0 Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.372825 4805 generic.go:334] "Generic (PLEG): container finished" podID="8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" containerID="359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba" exitCode=143 Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.372864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1","Type":"ContainerDied","Data":"ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4"} Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.372886 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1","Type":"ContainerDied","Data":"359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba"} Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.372896 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8001df36-cfd7-44bd-8e5a-dfa8c650b2d1","Type":"ContainerDied","Data":"06eb10dde969417885e5834da33c50879f75bf6f5efb18037bcce2f19c6eb570"} Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.372944 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.374845 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a4ad03-df41-44c9-8bcf-e93f380484ad","Type":"ContainerStarted","Data":"234858edd981f951703723a12523ed2db827b289fecf18edf5b33b87e12ac85d"} Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.377443 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" (UID: "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.388727 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b44da2ac-823e-47f5-83dd-5fd0fc93f874","Type":"ContainerStarted","Data":"0c5bffb40defc07d19f78cd1f76990d0f326a5620fe2c484c86199d328c36f56"} Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.398788 4805 scope.go:117] "RemoveContainer" containerID="17a416a85a870e6a61efb6f2fc8cb11aa366cb308ea76d674d24abc230271a1b" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.412319 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=10.575223181 podStartE2EDuration="11.412302931s" podCreationTimestamp="2026-02-17 00:44:18 +0000 UTC" firstStartedPulling="2026-02-17 00:44:26.096833986 +0000 UTC m=+1292.112643384" lastFinishedPulling="2026-02-17 00:44:26.933913726 +0000 UTC m=+1292.949723134" observedRunningTime="2026-02-17 00:44:29.409843192 +0000 UTC m=+1295.425652590" watchObservedRunningTime="2026-02-17 00:44:29.412302931 +0000 UTC m=+1295.428112329" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.432944 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data" (OuterVolumeSpecName: "config-data") pod "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" (UID: "8001df36-cfd7-44bd-8e5a-dfa8c650b2d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.434975 4805 scope.go:117] "RemoveContainer" containerID="ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.435807 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.435827 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.435836 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-logs\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.435846 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.435854 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.435862 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.435874 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8h8v\" (UniqueName: 
\"kubernetes.io/projected/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1-kube-api-access-x8h8v\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.448432 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-699c4cfd75-pjgkq"] Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.459690 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-699c4cfd75-pjgkq"] Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.463497 4805 scope.go:117] "RemoveContainer" containerID="359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.561709 4805 scope.go:117] "RemoveContainer" containerID="ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4" Feb 17 00:44:29 crc kubenswrapper[4805]: E0217 00:44:29.562316 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4\": container with ID starting with ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4 not found: ID does not exist" containerID="ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.562373 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4"} err="failed to get container status \"ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4\": rpc error: code = NotFound desc = could not find container \"ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4\": container with ID starting with ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4 not found: ID does not exist" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.562398 4805 scope.go:117] "RemoveContainer" containerID="359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba" Feb 17 00:44:29 crc kubenswrapper[4805]: E0217 00:44:29.562780 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba\": container with ID starting with 359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba not found: ID does not exist" containerID="359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.562831 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba"} err="failed to get container status \"359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba\": rpc error: code = NotFound desc = could not find container \"359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba\": container with ID starting with 359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba not found: ID does not exist" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.562865 4805 scope.go:117] "RemoveContainer" containerID="ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.563156 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4"} err="failed to get container status 
\"ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4\": rpc error: code = NotFound desc = could not find container \"ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4\": container with ID starting with ef3360a806f10d6c4515e7b1853ae6c04c21158013ae94b7774771d75c070ae4 not found: ID does not exist" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.563181 4805 scope.go:117] "RemoveContainer" containerID="359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.564228 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba"} err="failed to get container status \"359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba\": rpc error: code = NotFound desc = could not find container \"359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba\": container with ID starting with 359b71c6d87517613fc450621d313c4ad8e91a6d5a744ec8bd3f60406eb7eaba not found: ID does not exist" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.741487 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.755113 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.790818 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 00:44:29 crc kubenswrapper[4805]: E0217 00:44:29.791174 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" containerName="cinder-api" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.791190 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" containerName="cinder-api" Feb 17 00:44:29 crc kubenswrapper[4805]: E0217 00:44:29.791207 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d4f059-d277-419c-8a13-ed2a1a89a73c" containerName="neutron-api" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.791213 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d4f059-d277-419c-8a13-ed2a1a89a73c" containerName="neutron-api" Feb 17 00:44:29 crc kubenswrapper[4805]: E0217 00:44:29.791230 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" containerName="cinder-api-log" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.791236 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" containerName="cinder-api-log" Feb 17 00:44:29 crc kubenswrapper[4805]: E0217 00:44:29.791256 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47d4f059-d277-419c-8a13-ed2a1a89a73c" containerName="neutron-httpd" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.791262 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="47d4f059-d277-419c-8a13-ed2a1a89a73c" containerName="neutron-httpd" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.791435 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" containerName="cinder-api-log" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.791453 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d4f059-d277-419c-8a13-ed2a1a89a73c" containerName="neutron-api" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 
00:44:29.791471 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="47d4f059-d277-419c-8a13-ed2a1a89a73c" containerName="neutron-httpd" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.791484 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" containerName="cinder-api" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.792385 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.797830 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.798040 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.798574 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.810011 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.947243 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-logs\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.947301 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-config-data\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.947399 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-scripts\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.947430 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.947471 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.947514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.947541 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.947589 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tqv6\" (UniqueName: \"kubernetes.io/projected/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-kube-api-access-2tqv6\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:29 crc kubenswrapper[4805]: I0217 00:44:29.947650 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-config-data-custom\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.049299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.049417 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.049457 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.049503 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tqv6\" (UniqueName: \"kubernetes.io/projected/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-kube-api-access-2tqv6\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.049543 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-config-data-custom\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.049558 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-logs\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.049585 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-config-data\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.049628 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-scripts\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.049655 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.050571 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.051769 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-logs\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.058477 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.067037 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-config-data-custom\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.071880 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.076915 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tqv6\" (UniqueName: \"kubernetes.io/projected/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-kube-api-access-2tqv6\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.077246 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-config-data\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.077433 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.077714 4805 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265-scripts\") pod \"cinder-api-0\" (UID: \"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265\") " pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.117243 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.408379 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a4ad03-df41-44c9-8bcf-e93f380484ad","Type":"ContainerStarted","Data":"1f6dad299bcacac8b412475e8719caf523fce2bdef6f36fbb663c78df17440e5"} Feb 17 00:44:30 crc kubenswrapper[4805]: W0217 00:44:30.681837 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b4d0c3e_fef3_4f34_b837_6fc9c4ecf265.slice/crio-281ea5f15e80dfae99df98b9b81a5b470ad6794f60ba0c52cd8381903fa15d6a WatchSource:0}: Error finding container 281ea5f15e80dfae99df98b9b81a5b470ad6794f60ba0c52cd8381903fa15d6a: Status 404 returned error can't find the container with id 281ea5f15e80dfae99df98b9b81a5b470ad6794f60ba0c52cd8381903fa15d6a Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.694943 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.810432 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47d4f059-d277-419c-8a13-ed2a1a89a73c" path="/var/lib/kubelet/pods/47d4f059-d277-419c-8a13-ed2a1a89a73c/volumes" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.811287 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8001df36-cfd7-44bd-8e5a-dfa8c650b2d1" path="/var/lib/kubelet/pods/8001df36-cfd7-44bd-8e5a-dfa8c650b2d1/volumes" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.941817 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7456cd9cc6-8fjxw" podUID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.186:9311/healthcheck\": read tcp 10.217.0.2:60210->10.217.0.186:9311: read: connection reset by peer" Feb 17 00:44:30 crc kubenswrapper[4805]: I0217 00:44:30.942555 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7456cd9cc6-8fjxw" podUID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.186:9311/healthcheck\": read tcp 10.217.0.2:60202->10.217.0.186:9311: read: connection reset by peer" Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.422847 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a4ad03-df41-44c9-8bcf-e93f380484ad","Type":"ContainerStarted","Data":"81b332458e5446503ebdce7cd7a8399965d647dcf52d2627bab89cb0d58d5fbc"} Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.426445 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265","Type":"ContainerStarted","Data":"281ea5f15e80dfae99df98b9b81a5b470ad6794f60ba0c52cd8381903fa15d6a"} Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.429967 4805 generic.go:334] "Generic (PLEG): container finished" podID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerID="dac8e926e2499467d06ebe904129fcf110049a1f0f32c11de97cfa4942984957" exitCode=0 Feb 17 00:44:31 crc kubenswrapper[4805]: 
I0217 00:44:31.430022 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7456cd9cc6-8fjxw" event={"ID":"9c24df5f-e1a2-468b-a86a-cfccf396e5a9","Type":"ContainerDied","Data":"dac8e926e2499467d06ebe904129fcf110049a1f0f32c11de97cfa4942984957"} Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.579798 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.687214 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-logs\") pod \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.687281 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data-custom\") pod \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.687340 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data\") pod \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.687441 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-combined-ca-bundle\") pod \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.687586 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnj2j\" (UniqueName: \"kubernetes.io/projected/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-kube-api-access-nnj2j\") pod \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\" (UID: \"9c24df5f-e1a2-468b-a86a-cfccf396e5a9\") " Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.687848 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-logs" (OuterVolumeSpecName: "logs") pod "9c24df5f-e1a2-468b-a86a-cfccf396e5a9" (UID: "9c24df5f-e1a2-468b-a86a-cfccf396e5a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.688008 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-logs\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.706854 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9c24df5f-e1a2-468b-a86a-cfccf396e5a9" (UID: "9c24df5f-e1a2-468b-a86a-cfccf396e5a9"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.730741 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-kube-api-access-nnj2j" (OuterVolumeSpecName: "kube-api-access-nnj2j") pod "9c24df5f-e1a2-468b-a86a-cfccf396e5a9" (UID: "9c24df5f-e1a2-468b-a86a-cfccf396e5a9"). InnerVolumeSpecName "kube-api-access-nnj2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.776087 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c24df5f-e1a2-468b-a86a-cfccf396e5a9" (UID: "9c24df5f-e1a2-468b-a86a-cfccf396e5a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.790585 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.790611 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.790622 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnj2j\" (UniqueName: \"kubernetes.io/projected/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-kube-api-access-nnj2j\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.821666 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data" (OuterVolumeSpecName: "config-data") pod "9c24df5f-e1a2-468b-a86a-cfccf396e5a9" (UID: "9c24df5f-e1a2-468b-a86a-cfccf396e5a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:31 crc kubenswrapper[4805]: I0217 00:44:31.892813 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c24df5f-e1a2-468b-a86a-cfccf396e5a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.441191 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a4ad03-df41-44c9-8bcf-e93f380484ad","Type":"ContainerStarted","Data":"3943583201b08de89e4884e80dbd323b6c85c81b065436353eadffc33e665dac"} Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.443505 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265","Type":"ContainerStarted","Data":"0f0ed67f54a0d5e6af66d52597a4137c8de594b47a03421a18db46abd7644cc4"} Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.443540 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265","Type":"ContainerStarted","Data":"7c72230c523d2296c8d2c3e9e11aa4a073e34963782305575cc9e8791638969a"} Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.444720 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.450463 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7456cd9cc6-8fjxw" event={"ID":"9c24df5f-e1a2-468b-a86a-cfccf396e5a9","Type":"ContainerDied","Data":"8ec702b697b928e4607c72b1d12b8f19361046ffa1829bf181f1b7667f2a51a8"} Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.450513 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7456cd9cc6-8fjxw" Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.450517 4805 scope.go:117] "RemoveContainer" containerID="dac8e926e2499467d06ebe904129fcf110049a1f0f32c11de97cfa4942984957" Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.474975 4805 scope.go:117] "RemoveContainer" containerID="353fe59021006bf5b7f00d6928a6c83562ba99af62a211d7113d8b17088dfed7" Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.494889 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.494872297 podStartE2EDuration="3.494872297s" podCreationTimestamp="2026-02-17 00:44:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:32.485461373 +0000 UTC m=+1298.501270771" watchObservedRunningTime="2026-02-17 00:44:32.494872297 +0000 UTC m=+1298.510681685" Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.577186 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7456cd9cc6-8fjxw"] Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.588442 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7456cd9cc6-8fjxw"] Feb 17 00:44:32 crc kubenswrapper[4805]: I0217 00:44:32.795844 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" path="/var/lib/kubelet/pods/9c24df5f-e1a2-468b-a86a-cfccf396e5a9/volumes" Feb 17 00:44:33 crc kubenswrapper[4805]: I0217 00:44:33.485728 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 00:44:33 crc kubenswrapper[4805]: I0217 00:44:33.678483 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:44:33 crc kubenswrapper[4805]: I0217 00:44:33.777200 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 00:44:33 crc kubenswrapper[4805]: I0217 00:44:33.831104 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vjs6q"] Feb 17 00:44:33 crc kubenswrapper[4805]: I0217 00:44:33.831754 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" podUID="7c71d620-0f06-4b24-b647-98e1ea0004b1" containerName="dnsmasq-dns" containerID="cri-o://381d3ddac6be45bf607ade69403ada5024b6709e04a0738e354c5548ef642007" gracePeriod=10 Feb 17 00:44:34 crc kubenswrapper[4805]: I0217 00:44:34.474132 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a4ad03-df41-44c9-8bcf-e93f380484ad","Type":"ContainerStarted","Data":"3b3cc504e00c7dd6898d9db2d98625fd13cc1eaad53ccc842f74e4f519ea19eb"} Feb 17 00:44:34 crc kubenswrapper[4805]: I0217 00:44:34.474646 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 00:44:34 crc kubenswrapper[4805]: I0217 00:44:34.476561 4805 generic.go:334] "Generic (PLEG): container finished" podID="7c71d620-0f06-4b24-b647-98e1ea0004b1" containerID="381d3ddac6be45bf607ade69403ada5024b6709e04a0738e354c5548ef642007" exitCode=0 Feb 17 00:44:34 crc kubenswrapper[4805]: I0217 00:44:34.476645 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" 
event={"ID":"7c71d620-0f06-4b24-b647-98e1ea0004b1","Type":"ContainerDied","Data":"381d3ddac6be45bf607ade69403ada5024b6709e04a0738e354c5548ef642007"} Feb 17 00:44:34 crc kubenswrapper[4805]: I0217 00:44:34.500895 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.20243619 podStartE2EDuration="6.500877969s" podCreationTimestamp="2026-02-17 00:44:28 +0000 UTC" firstStartedPulling="2026-02-17 00:44:29.325111149 +0000 UTC m=+1295.340920547" lastFinishedPulling="2026-02-17 00:44:33.623552928 +0000 UTC m=+1299.639362326" observedRunningTime="2026-02-17 00:44:34.498544054 +0000 UTC m=+1300.514353452" watchObservedRunningTime="2026-02-17 00:44:34.500877969 +0000 UTC m=+1300.516687367" Feb 17 00:44:34 crc kubenswrapper[4805]: I0217 00:44:34.537791 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 00:44:34 crc kubenswrapper[4805]: I0217 00:44:34.677243 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7d5b44676f-vbgmb" Feb 17 00:44:34 crc kubenswrapper[4805]: I0217 00:44:34.685267 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:34 crc kubenswrapper[4805]: I0217 00:44:34.729508 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65599f5544-8m95b" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.120491 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.180337 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-config\") pod \"7c71d620-0f06-4b24-b647-98e1ea0004b1\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.180425 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-sb\") pod \"7c71d620-0f06-4b24-b647-98e1ea0004b1\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.180488 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-swift-storage-0\") pod \"7c71d620-0f06-4b24-b647-98e1ea0004b1\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.180552 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-svc\") pod \"7c71d620-0f06-4b24-b647-98e1ea0004b1\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.180616 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb9xh\" (UniqueName: \"kubernetes.io/projected/7c71d620-0f06-4b24-b647-98e1ea0004b1-kube-api-access-jb9xh\") pod \"7c71d620-0f06-4b24-b647-98e1ea0004b1\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.180671 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-nb\") pod \"7c71d620-0f06-4b24-b647-98e1ea0004b1\" (UID: \"7c71d620-0f06-4b24-b647-98e1ea0004b1\") " Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.186637 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c71d620-0f06-4b24-b647-98e1ea0004b1-kube-api-access-jb9xh" (OuterVolumeSpecName: "kube-api-access-jb9xh") pod "7c71d620-0f06-4b24-b647-98e1ea0004b1" (UID: "7c71d620-0f06-4b24-b647-98e1ea0004b1"). InnerVolumeSpecName "kube-api-access-jb9xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.259334 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c71d620-0f06-4b24-b647-98e1ea0004b1" (UID: "7c71d620-0f06-4b24-b647-98e1ea0004b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.261894 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c71d620-0f06-4b24-b647-98e1ea0004b1" (UID: "7c71d620-0f06-4b24-b647-98e1ea0004b1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.272368 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c71d620-0f06-4b24-b647-98e1ea0004b1" (UID: "7c71d620-0f06-4b24-b647-98e1ea0004b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.279958 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c71d620-0f06-4b24-b647-98e1ea0004b1" (UID: "7c71d620-0f06-4b24-b647-98e1ea0004b1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.285705 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.285733 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.285744 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.285753 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb9xh\" (UniqueName: \"kubernetes.io/projected/7c71d620-0f06-4b24-b647-98e1ea0004b1-kube-api-access-jb9xh\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.285763 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.287104 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-config" (OuterVolumeSpecName: "config") pod "7c71d620-0f06-4b24-b647-98e1ea0004b1" (UID: "7c71d620-0f06-4b24-b647-98e1ea0004b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.387542 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c71d620-0f06-4b24-b647-98e1ea0004b1-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.488561 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.488850 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b44da2ac-823e-47f5-83dd-5fd0fc93f874" containerName="cinder-scheduler" containerID="cri-o://46aa34541a96b835e06a82746e3ba39e20bf7a862c025391cc45337700c3cb07" gracePeriod=30 Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.488932 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b44da2ac-823e-47f5-83dd-5fd0fc93f874" containerName="probe" containerID="cri-o://0c5bffb40defc07d19f78cd1f76990d0f326a5620fe2c484c86199d328c36f56" gracePeriod=30 Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.489184 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-vjs6q" event={"ID":"7c71d620-0f06-4b24-b647-98e1ea0004b1","Type":"ContainerDied","Data":"1269fca751f20e3d98f6651efa43e40514ce6609f7a8288ac0e7f0da3e0e9fd4"} Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.489284 4805 scope.go:117] "RemoveContainer" containerID="381d3ddac6be45bf607ade69403ada5024b6709e04a0738e354c5548ef642007" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.520935 4805 scope.go:117] "RemoveContainer" containerID="412f80eadfd096862773228c45a0d8943aa3f8e2994f5b5999c7899a0024cba5" Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.530570 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vjs6q"] Feb 17 00:44:35 crc kubenswrapper[4805]: I0217 00:44:35.542568 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-vjs6q"] Feb 17 00:44:36 crc kubenswrapper[4805]: I0217 00:44:36.503174 4805 generic.go:334] "Generic (PLEG): container finished" podID="b44da2ac-823e-47f5-83dd-5fd0fc93f874" containerID="0c5bffb40defc07d19f78cd1f76990d0f326a5620fe2c484c86199d328c36f56" exitCode=0 Feb 17 00:44:36 crc kubenswrapper[4805]: I0217 00:44:36.503188 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b44da2ac-823e-47f5-83dd-5fd0fc93f874","Type":"ContainerDied","Data":"0c5bffb40defc07d19f78cd1f76990d0f326a5620fe2c484c86199d328c36f56"} Feb 17 00:44:36 crc kubenswrapper[4805]: I0217 00:44:36.794878 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c71d620-0f06-4b24-b647-98e1ea0004b1" path="/var/lib/kubelet/pods/7c71d620-0f06-4b24-b647-98e1ea0004b1/volumes" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.514202 4805 generic.go:334] "Generic (PLEG): container finished" podID="b44da2ac-823e-47f5-83dd-5fd0fc93f874" containerID="46aa34541a96b835e06a82746e3ba39e20bf7a862c025391cc45337700c3cb07" exitCode=0 Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.514528 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b44da2ac-823e-47f5-83dd-5fd0fc93f874","Type":"ContainerDied","Data":"46aa34541a96b835e06a82746e3ba39e20bf7a862c025391cc45337700c3cb07"} Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.663512 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.739963 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b44da2ac-823e-47f5-83dd-5fd0fc93f874-etc-machine-id\") pod \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.740037 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data-custom\") pod \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.740098 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data\") pod \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.740111 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b44da2ac-823e-47f5-83dd-5fd0fc93f874-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b44da2ac-823e-47f5-83dd-5fd0fc93f874" (UID: "b44da2ac-823e-47f5-83dd-5fd0fc93f874"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.740154 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqphb\" (UniqueName: \"kubernetes.io/projected/b44da2ac-823e-47f5-83dd-5fd0fc93f874-kube-api-access-rqphb\") pod \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.740228 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-scripts\") pod \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.740258 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-combined-ca-bundle\") pod \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\" (UID: \"b44da2ac-823e-47f5-83dd-5fd0fc93f874\") " Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.740699 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b44da2ac-823e-47f5-83dd-5fd0fc93f874-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.745842 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b44da2ac-823e-47f5-83dd-5fd0fc93f874" (UID: "b44da2ac-823e-47f5-83dd-5fd0fc93f874"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.746540 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-scripts" (OuterVolumeSpecName: "scripts") pod "b44da2ac-823e-47f5-83dd-5fd0fc93f874" (UID: "b44da2ac-823e-47f5-83dd-5fd0fc93f874"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.755462 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44da2ac-823e-47f5-83dd-5fd0fc93f874-kube-api-access-rqphb" (OuterVolumeSpecName: "kube-api-access-rqphb") pod "b44da2ac-823e-47f5-83dd-5fd0fc93f874" (UID: "b44da2ac-823e-47f5-83dd-5fd0fc93f874"). InnerVolumeSpecName "kube-api-access-rqphb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.813352 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b44da2ac-823e-47f5-83dd-5fd0fc93f874" (UID: "b44da2ac-823e-47f5-83dd-5fd0fc93f874"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.843534 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.843577 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.843594 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.843607 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqphb\" (UniqueName: \"kubernetes.io/projected/b44da2ac-823e-47f5-83dd-5fd0fc93f874-kube-api-access-rqphb\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.856423 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data" (OuterVolumeSpecName: "config-data") pod "b44da2ac-823e-47f5-83dd-5fd0fc93f874" (UID: "b44da2ac-823e-47f5-83dd-5fd0fc93f874"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:37 crc kubenswrapper[4805]: I0217 00:44:37.945581 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b44da2ac-823e-47f5-83dd-5fd0fc93f874-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.032637 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 00:44:38 crc kubenswrapper[4805]: E0217 00:44:38.033016 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44da2ac-823e-47f5-83dd-5fd0fc93f874" containerName="probe" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033033 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44da2ac-823e-47f5-83dd-5fd0fc93f874" containerName="probe" Feb 17 00:44:38 crc kubenswrapper[4805]: E0217 00:44:38.033043 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerName="barbican-api" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033049 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerName="barbican-api" Feb 17 00:44:38 crc kubenswrapper[4805]: E0217 00:44:38.033066 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerName="barbican-api-log" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033072 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerName="barbican-api-log" Feb 17 00:44:38 crc kubenswrapper[4805]: E0217 00:44:38.033084 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c71d620-0f06-4b24-b647-98e1ea0004b1" containerName="dnsmasq-dns" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033089 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c71d620-0f06-4b24-b647-98e1ea0004b1" containerName="dnsmasq-dns" Feb 17 00:44:38 crc kubenswrapper[4805]: E0217 00:44:38.033098 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c71d620-0f06-4b24-b647-98e1ea0004b1" containerName="init" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033104 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c71d620-0f06-4b24-b647-98e1ea0004b1" containerName="init" Feb 17 00:44:38 crc kubenswrapper[4805]: E0217 00:44:38.033120 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44da2ac-823e-47f5-83dd-5fd0fc93f874" containerName="cinder-scheduler" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033126 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44da2ac-823e-47f5-83dd-5fd0fc93f874" containerName="cinder-scheduler" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033290 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44da2ac-823e-47f5-83dd-5fd0fc93f874" containerName="cinder-scheduler" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033309 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerName="barbican-api" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033336 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c24df5f-e1a2-468b-a86a-cfccf396e5a9" containerName="barbican-api-log" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033344 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b44da2ac-823e-47f5-83dd-5fd0fc93f874" containerName="probe" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033358 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c71d620-0f06-4b24-b647-98e1ea0004b1" containerName="dnsmasq-dns" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.033942 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.038100 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.038145 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.038399 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-89z4m" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.047647 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.149317 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-openstack-config\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.149381 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrlkq\" (UniqueName: \"kubernetes.io/projected/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-kube-api-access-mrlkq\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.149406 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.149829 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-openstack-config-secret\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.251629 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-openstack-config\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.251691 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrlkq\" (UniqueName: \"kubernetes.io/projected/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-kube-api-access-mrlkq\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.251726 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.251869 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-openstack-config-secret\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.252684 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-openstack-config\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.255574 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-openstack-config-secret\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.256877 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.270075 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrlkq\" (UniqueName: \"kubernetes.io/projected/3d04a0a0-da8e-4d58-b70c-b0e60bd9660c-kube-api-access-mrlkq\") pod \"openstackclient\" (UID: \"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c\") " pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.432561 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.553412 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b44da2ac-823e-47f5-83dd-5fd0fc93f874","Type":"ContainerDied","Data":"5d389e380d3cdf667e72a4f747f9c3bcbc5be83f9ab3f65b5ccd21d87cce0cfb"} Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.553504 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.553898 4805 scope.go:117] "RemoveContainer" containerID="0c5bffb40defc07d19f78cd1f76990d0f326a5620fe2c484c86199d328c36f56" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.635160 4805 scope.go:117] "RemoveContainer" containerID="46aa34541a96b835e06a82746e3ba39e20bf7a862c025391cc45337700c3cb07" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.639012 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.654413 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.672028 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.690710 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.704904 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.730465 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.761841 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.761934 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.761973 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-config-data\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.762005 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-scripts\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.762082 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d72m7\" (UniqueName: \"kubernetes.io/projected/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-kube-api-access-d72m7\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.762104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.805983 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b44da2ac-823e-47f5-83dd-5fd0fc93f874" path="/var/lib/kubelet/pods/b44da2ac-823e-47f5-83dd-5fd0fc93f874/volumes" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.863667 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d72m7\" (UniqueName: \"kubernetes.io/projected/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-kube-api-access-d72m7\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.864947 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.865812 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.866021 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.866131 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-config-data\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.866210 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-scripts\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.866777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.871728 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.872116 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-config-data\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.872375 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.875656 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-scripts\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.883030 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d72m7\" (UniqueName: \"kubernetes.io/projected/3849aaa3-5b53-484e-9f8d-36eef09cb1b4-kube-api-access-d72m7\") pod \"cinder-scheduler-0\" (UID: \"3849aaa3-5b53-484e-9f8d-36eef09cb1b4\") " pod="openstack/cinder-scheduler-0" Feb 17 00:44:38 crc kubenswrapper[4805]: I0217 00:44:38.959486 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 00:44:39 crc kubenswrapper[4805]: I0217 00:44:39.016268 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 00:44:39 crc kubenswrapper[4805]: I0217 00:44:39.568813 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 00:44:39 crc kubenswrapper[4805]: I0217 00:44:39.588437 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c","Type":"ContainerStarted","Data":"cfed98049bf3455d4d9d907a30a93693f7d2c147b8e74a8000c8b1c1378010f2"} Feb 17 00:44:40 crc kubenswrapper[4805]: I0217 00:44:40.603203 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3849aaa3-5b53-484e-9f8d-36eef09cb1b4","Type":"ContainerStarted","Data":"fb15b1f4aaf7cca935fa1e05372db3146d375a4fd20cd6f1c7c7a4d0ef306444"} Feb 17 00:44:41 crc kubenswrapper[4805]: I0217 00:44:41.617486 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3849aaa3-5b53-484e-9f8d-36eef09cb1b4","Type":"ContainerStarted","Data":"1c43d0d354e7346cf356cb858e86e0f4b02bcb589efb7e081e3205d5d5451aea"} Feb 17 00:44:41 crc kubenswrapper[4805]: I0217 00:44:41.619415 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3849aaa3-5b53-484e-9f8d-36eef09cb1b4","Type":"ContainerStarted","Data":"e10f1875812070ca76fbed25ed652632bd1362513c90b05f791f1ebaf4a7435f"} Feb 17 00:44:41 crc kubenswrapper[4805]: I0217 00:44:41.645199 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.645172157 podStartE2EDuration="3.645172157s" podCreationTimestamp="2026-02-17 00:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:41.633408746 +0000 UTC m=+1307.649218144" watchObservedRunningTime="2026-02-17 00:44:41.645172157 +0000 UTC m=+1307.660981565" Feb 17 00:44:41 crc 
kubenswrapper[4805]: I0217 00:44:41.964032 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 00:44:43 crc kubenswrapper[4805]: I0217 00:44:43.882698 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:43 crc kubenswrapper[4805]: I0217 00:44:43.883457 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="sg-core" containerID="cri-o://3943583201b08de89e4884e80dbd323b6c85c81b065436353eadffc33e665dac" gracePeriod=30 Feb 17 00:44:43 crc kubenswrapper[4805]: I0217 00:44:43.883500 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="proxy-httpd" containerID="cri-o://3b3cc504e00c7dd6898d9db2d98625fd13cc1eaad53ccc842f74e4f519ea19eb" gracePeriod=30 Feb 17 00:44:43 crc kubenswrapper[4805]: I0217 00:44:43.883545 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="ceilometer-notification-agent" containerID="cri-o://81b332458e5446503ebdce7cd7a8399965d647dcf52d2627bab89cb0d58d5fbc" gracePeriod=30 Feb 17 00:44:43 crc kubenswrapper[4805]: I0217 00:44:43.883433 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="ceilometer-central-agent" containerID="cri-o://1f6dad299bcacac8b412475e8719caf523fce2bdef6f36fbb663c78df17440e5" gracePeriod=30 Feb 17 00:44:43 crc kubenswrapper[4805]: I0217 00:44:43.889411 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 00:44:44 crc kubenswrapper[4805]: I0217 00:44:44.019556 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 00:44:44 crc kubenswrapper[4805]: I0217 00:44:44.650106 4805 generic.go:334] "Generic (PLEG): container finished" podID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerID="3b3cc504e00c7dd6898d9db2d98625fd13cc1eaad53ccc842f74e4f519ea19eb" exitCode=0 Feb 17 00:44:44 crc kubenswrapper[4805]: I0217 00:44:44.650500 4805 generic.go:334] "Generic (PLEG): container finished" podID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerID="3943583201b08de89e4884e80dbd323b6c85c81b065436353eadffc33e665dac" exitCode=2 Feb 17 00:44:44 crc kubenswrapper[4805]: I0217 00:44:44.650515 4805 generic.go:334] "Generic (PLEG): container finished" podID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerID="1f6dad299bcacac8b412475e8719caf523fce2bdef6f36fbb663c78df17440e5" exitCode=0 Feb 17 00:44:44 crc kubenswrapper[4805]: I0217 00:44:44.650191 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a4ad03-df41-44c9-8bcf-e93f380484ad","Type":"ContainerDied","Data":"3b3cc504e00c7dd6898d9db2d98625fd13cc1eaad53ccc842f74e4f519ea19eb"} Feb 17 00:44:44 crc kubenswrapper[4805]: I0217 00:44:44.650557 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a4ad03-df41-44c9-8bcf-e93f380484ad","Type":"ContainerDied","Data":"3943583201b08de89e4884e80dbd323b6c85c81b065436353eadffc33e665dac"} Feb 17 00:44:44 crc kubenswrapper[4805]: I0217 00:44:44.650573 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d7a4ad03-df41-44c9-8bcf-e93f380484ad","Type":"ContainerDied","Data":"1f6dad299bcacac8b412475e8719caf523fce2bdef6f36fbb663c78df17440e5"} Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.395963 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7b9959496c-vdvnd"] Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.397904 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.401427 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.405497 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.406211 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.410592 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7b9959496c-vdvnd"] Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.506373 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f289780f-6025-465b-859f-e951ffd9e8e5-log-httpd\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.506416 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f289780f-6025-465b-859f-e951ffd9e8e5-etc-swift\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.506498 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f289780f-6025-465b-859f-e951ffd9e8e5-run-httpd\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.506516 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-config-data\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.506555 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-combined-ca-bundle\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.506588 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-public-tls-certs\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " 
pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.506630 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xwm9\" (UniqueName: \"kubernetes.io/projected/f289780f-6025-465b-859f-e951ffd9e8e5-kube-api-access-7xwm9\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.506662 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-internal-tls-certs\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.608509 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-config-data\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.608784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-combined-ca-bundle\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.608875 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-public-tls-certs\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.608978 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xwm9\" (UniqueName: \"kubernetes.io/projected/f289780f-6025-465b-859f-e951ffd9e8e5-kube-api-access-7xwm9\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.609070 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-internal-tls-certs\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.609156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f289780f-6025-465b-859f-e951ffd9e8e5-log-httpd\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.609228 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f289780f-6025-465b-859f-e951ffd9e8e5-etc-swift\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " 
pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.609436 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f289780f-6025-465b-859f-e951ffd9e8e5-run-httpd\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.610351 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f289780f-6025-465b-859f-e951ffd9e8e5-run-httpd\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.619093 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-internal-tls-certs\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.621752 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f289780f-6025-465b-859f-e951ffd9e8e5-log-httpd\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.627490 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-combined-ca-bundle\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.627716 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-config-data\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.631093 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f289780f-6025-465b-859f-e951ffd9e8e5-public-tls-certs\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.632201 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f289780f-6025-465b-859f-e951ffd9e8e5-etc-swift\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.641304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xwm9\" (UniqueName: \"kubernetes.io/projected/f289780f-6025-465b-859f-e951ffd9e8e5-kube-api-access-7xwm9\") pod \"swift-proxy-7b9959496c-vdvnd\" (UID: \"f289780f-6025-465b-859f-e951ffd9e8e5\") " pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.702584 4805 generic.go:334] "Generic 
(PLEG): container finished" podID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerID="81b332458e5446503ebdce7cd7a8399965d647dcf52d2627bab89cb0d58d5fbc" exitCode=0 Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.702629 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a4ad03-df41-44c9-8bcf-e93f380484ad","Type":"ContainerDied","Data":"81b332458e5446503ebdce7cd7a8399965d647dcf52d2627bab89cb0d58d5fbc"} Feb 17 00:44:45 crc kubenswrapper[4805]: I0217 00:44:45.716955 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:49 crc kubenswrapper[4805]: I0217 00:44:49.256186 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 00:44:49 crc kubenswrapper[4805]: I0217 00:44:49.315074 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7fd8fd677-jrz8c" Feb 17 00:44:49 crc kubenswrapper[4805]: I0217 00:44:49.376781 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-549f7bcc7b-l2thx"] Feb 17 00:44:49 crc kubenswrapper[4805]: I0217 00:44:49.377197 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-549f7bcc7b-l2thx" podUID="e07b33ca-66f5-4047-b754-ac637f0db5a5" containerName="neutron-api" containerID="cri-o://eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff" gracePeriod=30 Feb 17 00:44:49 crc kubenswrapper[4805]: I0217 00:44:49.377405 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-549f7bcc7b-l2thx" podUID="e07b33ca-66f5-4047-b754-ac637f0db5a5" containerName="neutron-httpd" containerID="cri-o://c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1" gracePeriod=30 Feb 17 00:44:49 crc kubenswrapper[4805]: I0217 00:44:49.748624 4805 generic.go:334] "Generic (PLEG): container finished" podID="e07b33ca-66f5-4047-b754-ac637f0db5a5" containerID="c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1" exitCode=0 Feb 17 00:44:49 crc kubenswrapper[4805]: I0217 00:44:49.748679 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-549f7bcc7b-l2thx" event={"ID":"e07b33ca-66f5-4047-b754-ac637f0db5a5","Type":"ContainerDied","Data":"c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1"} Feb 17 00:44:50 crc kubenswrapper[4805]: I0217 00:44:50.917869 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.041231 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-combined-ca-bundle\") pod \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.041647 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-run-httpd\") pod \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.041728 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-log-httpd\") pod \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.041755 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-sg-core-conf-yaml\") pod \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.041826 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhmw8\" (UniqueName: \"kubernetes.io/projected/d7a4ad03-df41-44c9-8bcf-e93f380484ad-kube-api-access-mhmw8\") pod \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.041850 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-scripts\") pod \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.041919 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-config-data\") pod \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\" (UID: \"d7a4ad03-df41-44c9-8bcf-e93f380484ad\") " Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.042101 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d7a4ad03-df41-44c9-8bcf-e93f380484ad" (UID: "d7a4ad03-df41-44c9-8bcf-e93f380484ad"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.042256 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d7a4ad03-df41-44c9-8bcf-e93f380484ad" (UID: "d7a4ad03-df41-44c9-8bcf-e93f380484ad"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.042589 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.042609 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7a4ad03-df41-44c9-8bcf-e93f380484ad-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.047477 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-scripts" (OuterVolumeSpecName: "scripts") pod "d7a4ad03-df41-44c9-8bcf-e93f380484ad" (UID: "d7a4ad03-df41-44c9-8bcf-e93f380484ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.050940 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a4ad03-df41-44c9-8bcf-e93f380484ad-kube-api-access-mhmw8" (OuterVolumeSpecName: "kube-api-access-mhmw8") pod "d7a4ad03-df41-44c9-8bcf-e93f380484ad" (UID: "d7a4ad03-df41-44c9-8bcf-e93f380484ad"). InnerVolumeSpecName "kube-api-access-mhmw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.083645 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d7a4ad03-df41-44c9-8bcf-e93f380484ad" (UID: "d7a4ad03-df41-44c9-8bcf-e93f380484ad"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.144295 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.144357 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhmw8\" (UniqueName: \"kubernetes.io/projected/d7a4ad03-df41-44c9-8bcf-e93f380484ad-kube-api-access-mhmw8\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.144368 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.155735 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7a4ad03-df41-44c9-8bcf-e93f380484ad" (UID: "d7a4ad03-df41-44c9-8bcf-e93f380484ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.179397 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-config-data" (OuterVolumeSpecName: "config-data") pod "d7a4ad03-df41-44c9-8bcf-e93f380484ad" (UID: "d7a4ad03-df41-44c9-8bcf-e93f380484ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.218011 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7b9959496c-vdvnd"] Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.246144 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.246169 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a4ad03-df41-44c9-8bcf-e93f380484ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.795789 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b9959496c-vdvnd" event={"ID":"f289780f-6025-465b-859f-e951ffd9e8e5","Type":"ContainerStarted","Data":"fe4aad7853f226c48aa518e4981e9086000247c8ca3c4d55ede593af57e980ea"} Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.796161 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b9959496c-vdvnd" event={"ID":"f289780f-6025-465b-859f-e951ffd9e8e5","Type":"ContainerStarted","Data":"5d6cca2b3add9ecc160d0d710b04db6c1d038444b12c9542d89930be36de1ab4"} Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.796220 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.796232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b9959496c-vdvnd" event={"ID":"f289780f-6025-465b-859f-e951ffd9e8e5","Type":"ContainerStarted","Data":"90cd13d0d525a1d346d40b4ca0c6936bb2f76f2674d1e9a5454d8ff8e741dc3e"} Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.802165 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.803845 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7a4ad03-df41-44c9-8bcf-e93f380484ad","Type":"ContainerDied","Data":"234858edd981f951703723a12523ed2db827b289fecf18edf5b33b87e12ac85d"} Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.803884 4805 scope.go:117] "RemoveContainer" containerID="3b3cc504e00c7dd6898d9db2d98625fd13cc1eaad53ccc842f74e4f519ea19eb" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.807144 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3d04a0a0-da8e-4d58-b70c-b0e60bd9660c","Type":"ContainerStarted","Data":"8334198ecd45de64d242cc41e71f929133273a5ad5cf202a78e72918c5bbb51c"} Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.843537 4805 scope.go:117] "RemoveContainer" containerID="3943583201b08de89e4884e80dbd323b6c85c81b065436353eadffc33e665dac" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.846243 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7b9959496c-vdvnd" podStartSLOduration=6.8462226059999995 podStartE2EDuration="6.846222606s" podCreationTimestamp="2026-02-17 00:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:51.817656022 +0000 UTC m=+1317.833465420" watchObservedRunningTime="2026-02-17 00:44:51.846222606 +0000 UTC m=+1317.862032004" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.850067 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.164004974 podStartE2EDuration="13.850055973s" podCreationTimestamp="2026-02-17 00:44:38 +0000 UTC" firstStartedPulling="2026-02-17 00:44:38.96300059 +0000 UTC m=+1304.978809988" lastFinishedPulling="2026-02-17 00:44:50.649051589 +0000 UTC m=+1316.664860987" observedRunningTime="2026-02-17 00:44:51.843742646 +0000 UTC m=+1317.859552044" watchObservedRunningTime="2026-02-17 00:44:51.850055973 +0000 UTC m=+1317.865865371" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.885318 4805 scope.go:117] "RemoveContainer" containerID="81b332458e5446503ebdce7cd7a8399965d647dcf52d2627bab89cb0d58d5fbc" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.886777 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.900886 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.929269 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:51 crc kubenswrapper[4805]: E0217 00:44:51.932032 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="proxy-httpd" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.932059 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="proxy-httpd" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.932661 4805 scope.go:117] "RemoveContainer" containerID="1f6dad299bcacac8b412475e8719caf523fce2bdef6f36fbb663c78df17440e5" Feb 17 00:44:51 crc kubenswrapper[4805]: E0217 00:44:51.934391 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="ceilometer-notification-agent" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.934712 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="ceilometer-notification-agent" Feb 17 00:44:51 crc kubenswrapper[4805]: E0217 00:44:51.934949 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="sg-core" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.934962 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="sg-core" Feb 17 00:44:51 crc kubenswrapper[4805]: E0217 00:44:51.935113 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="ceilometer-central-agent" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.935124 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="ceilometer-central-agent" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.935623 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="ceilometer-notification-agent" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.935650 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="proxy-httpd" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.935671 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="ceilometer-central-agent" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.935686 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" containerName="sg-core" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.943145 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.943264 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.945563 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 00:44:51 crc kubenswrapper[4805]: I0217 00:44:51.945893 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.073696 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-scripts\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.073826 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-log-httpd\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.073857 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfk2p\" (UniqueName: \"kubernetes.io/projected/aca122cb-0d44-4426-a51f-55ded72d70e7-kube-api-access-sfk2p\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.073940 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.073969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-config-data\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.074009 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-run-httpd\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.074125 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.175779 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-log-httpd\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.175821 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfk2p\" (UniqueName: 
\"kubernetes.io/projected/aca122cb-0d44-4426-a51f-55ded72d70e7-kube-api-access-sfk2p\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.175888 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.175909 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-config-data\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.175956 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-run-httpd\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.175994 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.176035 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-scripts\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.176392 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-log-httpd\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.176826 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-run-httpd\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.180211 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.182437 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.183770 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-scripts\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.188411 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-config-data\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.200192 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfk2p\" (UniqueName: \"kubernetes.io/projected/aca122cb-0d44-4426-a51f-55ded72d70e7-kube-api-access-sfk2p\") pod \"ceilometer-0\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.266874 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.735834 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.796241 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a4ad03-df41-44c9-8bcf-e93f380484ad" path="/var/lib/kubelet/pods/d7a4ad03-df41-44c9-8bcf-e93f380484ad/volumes" Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.912081 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aca122cb-0d44-4426-a51f-55ded72d70e7","Type":"ContainerStarted","Data":"922d8eba0ce807b94cfb3b9524119292053375ae514d6e41e840f5ff2b1e6b50"} Feb 17 00:44:52 crc kubenswrapper[4805]: I0217 00:44:52.918169 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:44:53 crc kubenswrapper[4805]: I0217 00:44:53.922134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aca122cb-0d44-4426-a51f-55ded72d70e7","Type":"ContainerStarted","Data":"433763dbce326caf7b981a4a30e6c7a73ea7e72ce1cf500d0e478dbc9a04288d"} Feb 17 00:44:54 crc kubenswrapper[4805]: I0217 00:44:54.578071 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:44:54 crc kubenswrapper[4805]: I0217 00:44:54.949850 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aca122cb-0d44-4426-a51f-55ded72d70e7","Type":"ContainerStarted","Data":"9b365220ae01b53ea7ec7674248cc59cc837c5ee3e23520a7d7b02086b4a838a"} Feb 17 00:44:55 crc kubenswrapper[4805]: I0217 00:44:55.960280 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aca122cb-0d44-4426-a51f-55ded72d70e7","Type":"ContainerStarted","Data":"9de12d9103f5f3e819e27a6675d3753026ea7f34e08a9e5dcf4e9550f3dfd34b"} Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.800344 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.824178 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-ovndb-tls-certs\") pod \"e07b33ca-66f5-4047-b754-ac637f0db5a5\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.824287 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck4jd\" (UniqueName: \"kubernetes.io/projected/e07b33ca-66f5-4047-b754-ac637f0db5a5-kube-api-access-ck4jd\") pod \"e07b33ca-66f5-4047-b754-ac637f0db5a5\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.826368 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-config\") pod \"e07b33ca-66f5-4047-b754-ac637f0db5a5\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.826438 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-httpd-config\") pod \"e07b33ca-66f5-4047-b754-ac637f0db5a5\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.826543 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-combined-ca-bundle\") pod \"e07b33ca-66f5-4047-b754-ac637f0db5a5\" (UID: \"e07b33ca-66f5-4047-b754-ac637f0db5a5\") " Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.835629 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e07b33ca-66f5-4047-b754-ac637f0db5a5" (UID: "e07b33ca-66f5-4047-b754-ac637f0db5a5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.830982 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e07b33ca-66f5-4047-b754-ac637f0db5a5-kube-api-access-ck4jd" (OuterVolumeSpecName: "kube-api-access-ck4jd") pod "e07b33ca-66f5-4047-b754-ac637f0db5a5" (UID: "e07b33ca-66f5-4047-b754-ac637f0db5a5"). InnerVolumeSpecName "kube-api-access-ck4jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.891098 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-config" (OuterVolumeSpecName: "config") pod "e07b33ca-66f5-4047-b754-ac637f0db5a5" (UID: "e07b33ca-66f5-4047-b754-ac637f0db5a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.898946 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e07b33ca-66f5-4047-b754-ac637f0db5a5" (UID: "e07b33ca-66f5-4047-b754-ac637f0db5a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.913849 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e07b33ca-66f5-4047-b754-ac637f0db5a5" (UID: "e07b33ca-66f5-4047-b754-ac637f0db5a5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.929399 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.929434 4805 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.929443 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck4jd\" (UniqueName: \"kubernetes.io/projected/e07b33ca-66f5-4047-b754-ac637f0db5a5-kube-api-access-ck4jd\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.929454 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.929463 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e07b33ca-66f5-4047-b754-ac637f0db5a5-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.972821 4805 generic.go:334] "Generic (PLEG): container finished" podID="e07b33ca-66f5-4047-b754-ac637f0db5a5" containerID="eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff" exitCode=0 Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.972878 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-549f7bcc7b-l2thx" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.972887 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-549f7bcc7b-l2thx" event={"ID":"e07b33ca-66f5-4047-b754-ac637f0db5a5","Type":"ContainerDied","Data":"eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff"} Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.973050 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-549f7bcc7b-l2thx" event={"ID":"e07b33ca-66f5-4047-b754-ac637f0db5a5","Type":"ContainerDied","Data":"22fb4d15018d0efcaded59a795f07c5576dcaf6d177b8ab9dff81c21f2548608"} Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.973069 4805 scope.go:117] "RemoveContainer" containerID="c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.976553 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aca122cb-0d44-4426-a51f-55ded72d70e7","Type":"ContainerStarted","Data":"d692ae24240deba5be9edc1680c5939230c3916252f34c348e16f74612133c91"} Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.976803 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="ceilometer-central-agent" containerID="cri-o://433763dbce326caf7b981a4a30e6c7a73ea7e72ce1cf500d0e478dbc9a04288d" gracePeriod=30 Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.976854 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.976853 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="proxy-httpd" containerID="cri-o://d692ae24240deba5be9edc1680c5939230c3916252f34c348e16f74612133c91" gracePeriod=30 Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.976889 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="ceilometer-notification-agent" containerID="cri-o://9b365220ae01b53ea7ec7674248cc59cc837c5ee3e23520a7d7b02086b4a838a" gracePeriod=30 Feb 17 00:44:56 crc kubenswrapper[4805]: I0217 00:44:56.976887 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="sg-core" containerID="cri-o://9de12d9103f5f3e819e27a6675d3753026ea7f34e08a9e5dcf4e9550f3dfd34b" gracePeriod=30 Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.004220 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.612047863 podStartE2EDuration="6.004202925s" podCreationTimestamp="2026-02-17 00:44:51 +0000 UTC" firstStartedPulling="2026-02-17 00:44:52.744711622 +0000 UTC m=+1318.760521030" lastFinishedPulling="2026-02-17 00:44:56.136866694 +0000 UTC m=+1322.152676092" observedRunningTime="2026-02-17 00:44:57.000383387 +0000 UTC m=+1323.016192795" watchObservedRunningTime="2026-02-17 00:44:57.004202925 +0000 UTC m=+1323.020012323" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.018559 4805 scope.go:117] "RemoveContainer" containerID="eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff" Feb 17 00:44:57 crc kubenswrapper[4805]: 
I0217 00:44:57.036592 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-549f7bcc7b-l2thx"] Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.044691 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-549f7bcc7b-l2thx"] Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.056826 4805 scope.go:117] "RemoveContainer" containerID="c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1" Feb 17 00:44:57 crc kubenswrapper[4805]: E0217 00:44:57.057276 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1\": container with ID starting with c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1 not found: ID does not exist" containerID="c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.057332 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1"} err="failed to get container status \"c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1\": rpc error: code = NotFound desc = could not find container \"c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1\": container with ID starting with c130853cef835054ed6d77f3e013caca9e4b295379a3ece7843b0f9565cd02f1 not found: ID does not exist" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.057353 4805 scope.go:117] "RemoveContainer" containerID="eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff" Feb 17 00:44:57 crc kubenswrapper[4805]: E0217 00:44:57.058203 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff\": container with ID starting with eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff not found: ID does not exist" containerID="eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.058235 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff"} err="failed to get container status \"eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff\": rpc error: code = NotFound desc = could not find container \"eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff\": container with ID starting with eb2e181b9020401a8a3c5ee1dcf9ccba3d694549c597661ffc2f43c62799bdff not found: ID does not exist" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.265551 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-9c44689dd-p9ww5"] Feb 17 00:44:57 crc kubenswrapper[4805]: E0217 00:44:57.266229 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07b33ca-66f5-4047-b754-ac637f0db5a5" containerName="neutron-httpd" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.266246 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07b33ca-66f5-4047-b754-ac637f0db5a5" containerName="neutron-httpd" Feb 17 00:44:57 crc kubenswrapper[4805]: E0217 00:44:57.266278 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07b33ca-66f5-4047-b754-ac637f0db5a5" containerName="neutron-api" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 
00:44:57.266284 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07b33ca-66f5-4047-b754-ac637f0db5a5" containerName="neutron-api" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.266615 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e07b33ca-66f5-4047-b754-ac637f0db5a5" containerName="neutron-httpd" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.266637 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e07b33ca-66f5-4047-b754-ac637f0db5a5" containerName="neutron-api" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.267270 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.271405 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.271638 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-5dc2m" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.271796 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.320215 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9c44689dd-p9ww5"] Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.344119 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data-custom\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.344190 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-combined-ca-bundle\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.344296 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tq2d\" (UniqueName: \"kubernetes.io/projected/8cc03862-2ea6-4041-badb-7902bc29fb9f-kube-api-access-7tq2d\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.344452 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.412128 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-h5drq"] Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.413982 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.434556 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-h5drq"] Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.446567 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-8b8fdf57c-f4j8b"] Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.447732 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.450918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.450979 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.451027 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data-custom\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.451060 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-combined-ca-bundle\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.451075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.451098 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.451140 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-config\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.451161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.451185 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tq2d\" (UniqueName: \"kubernetes.io/projected/8cc03862-2ea6-4041-badb-7902bc29fb9f-kube-api-access-7tq2d\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.451216 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnz4s\" (UniqueName: \"kubernetes.io/projected/2557baf3-efbc-4e37-bb54-e3b55b097025-kube-api-access-cnz4s\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.457893 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.464089 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6db4db54cd-59rhb"] Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.465343 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.469549 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data-custom\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.470257 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.474343 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.477647 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8b8fdf57c-f4j8b"] Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.483000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-combined-ca-bundle\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.490291 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6db4db54cd-59rhb"] Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.491406 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tq2d\" (UniqueName: \"kubernetes.io/projected/8cc03862-2ea6-4041-badb-7902bc29fb9f-kube-api-access-7tq2d\") pod \"heat-engine-9c44689dd-p9ww5\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " 
pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.558658 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-combined-ca-bundle\") pod \"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.558728 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9459\" (UniqueName: \"kubernetes.io/projected/579b8385-e85e-43c6-b89d-51143c79b433-kube-api-access-w9459\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.558762 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-config\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.558790 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.558847 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.558876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnz4s\" (UniqueName: \"kubernetes.io/projected/2557baf3-efbc-4e37-bb54-e3b55b097025-kube-api-access-cnz4s\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.558922 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-combined-ca-bundle\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.559026 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxsfz\" (UniqueName: \"kubernetes.io/projected/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-kube-api-access-hxsfz\") pod \"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.559218 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-swift-storage-0\") pod 
\"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.559265 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data\") pod \"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.559395 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data-custom\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.559470 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.559518 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data-custom\") pod \"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.559547 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.560064 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.561956 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-config\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.562691 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.564526 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: 
\"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.566126 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.590015 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnz4s\" (UniqueName: \"kubernetes.io/projected/2557baf3-efbc-4e37-bb54-e3b55b097025-kube-api-access-cnz4s\") pod \"dnsmasq-dns-7756b9d78c-h5drq\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.622994 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.661031 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data\") pod \"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.661087 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data-custom\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.661139 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data-custom\") pod \"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.661175 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-combined-ca-bundle\") pod \"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.661198 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9459\" (UniqueName: \"kubernetes.io/projected/579b8385-e85e-43c6-b89d-51143c79b433-kube-api-access-w9459\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.661230 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.661261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-combined-ca-bundle\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.661339 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxsfz\" (UniqueName: \"kubernetes.io/projected/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-kube-api-access-hxsfz\") pod \"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.666402 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data\") pod \"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.669887 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data-custom\") pod \"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.673972 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.675829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data-custom\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.683027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-combined-ca-bundle\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.686357 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9459\" (UniqueName: \"kubernetes.io/projected/579b8385-e85e-43c6-b89d-51143c79b433-kube-api-access-w9459\") pod \"heat-api-8b8fdf57c-f4j8b\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.694177 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-combined-ca-bundle\") pod \"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.700826 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxsfz\" (UniqueName: \"kubernetes.io/projected/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-kube-api-access-hxsfz\") pod 
\"heat-cfnapi-6db4db54cd-59rhb\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:57 crc kubenswrapper[4805]: E0217 00:44:57.742990 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaca122cb_0d44_4426_a51f_55ded72d70e7.slice/crio-conmon-9b365220ae01b53ea7ec7674248cc59cc837c5ee3e23520a7d7b02086b4a838a.scope\": RecentStats: unable to find data in memory cache]" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.760892 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.886995 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:44:57 crc kubenswrapper[4805]: I0217 00:44:57.938906 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:44:58 crc kubenswrapper[4805]: I0217 00:44:58.067708 4805 generic.go:334] "Generic (PLEG): container finished" podID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerID="d692ae24240deba5be9edc1680c5939230c3916252f34c348e16f74612133c91" exitCode=0 Feb 17 00:44:58 crc kubenswrapper[4805]: I0217 00:44:58.068025 4805 generic.go:334] "Generic (PLEG): container finished" podID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerID="9de12d9103f5f3e819e27a6675d3753026ea7f34e08a9e5dcf4e9550f3dfd34b" exitCode=2 Feb 17 00:44:58 crc kubenswrapper[4805]: I0217 00:44:58.068032 4805 generic.go:334] "Generic (PLEG): container finished" podID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerID="9b365220ae01b53ea7ec7674248cc59cc837c5ee3e23520a7d7b02086b4a838a" exitCode=0 Feb 17 00:44:58 crc kubenswrapper[4805]: I0217 00:44:58.067800 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aca122cb-0d44-4426-a51f-55ded72d70e7","Type":"ContainerDied","Data":"d692ae24240deba5be9edc1680c5939230c3916252f34c348e16f74612133c91"} Feb 17 00:44:58 crc kubenswrapper[4805]: I0217 00:44:58.068070 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aca122cb-0d44-4426-a51f-55ded72d70e7","Type":"ContainerDied","Data":"9de12d9103f5f3e819e27a6675d3753026ea7f34e08a9e5dcf4e9550f3dfd34b"} Feb 17 00:44:58 crc kubenswrapper[4805]: I0217 00:44:58.068084 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aca122cb-0d44-4426-a51f-55ded72d70e7","Type":"ContainerDied","Data":"9b365220ae01b53ea7ec7674248cc59cc837c5ee3e23520a7d7b02086b4a838a"} Feb 17 00:44:58 crc kubenswrapper[4805]: I0217 00:44:58.172809 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9c44689dd-p9ww5"] Feb 17 00:44:58 crc kubenswrapper[4805]: I0217 00:44:58.334082 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-h5drq"] Feb 17 00:44:58 crc kubenswrapper[4805]: I0217 00:44:58.591307 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8b8fdf57c-f4j8b"] Feb 17 00:44:58 crc kubenswrapper[4805]: I0217 00:44:58.601895 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6db4db54cd-59rhb"] Feb 17 00:44:58 crc kubenswrapper[4805]: W0217 00:44:58.606387 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod579b8385_e85e_43c6_b89d_51143c79b433.slice/crio-337e0f118c524ad136f3152cd9e600f25b4838e227bb79630cace00b3e18ed1d WatchSource:0}: Error finding container 337e0f118c524ad136f3152cd9e600f25b4838e227bb79630cace00b3e18ed1d: Status 404 returned error can't find the container with id 337e0f118c524ad136f3152cd9e600f25b4838e227bb79630cace00b3e18ed1d Feb 17 00:44:58 crc kubenswrapper[4805]: I0217 00:44:58.798161 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e07b33ca-66f5-4047-b754-ac637f0db5a5" path="/var/lib/kubelet/pods/e07b33ca-66f5-4047-b754-ac637f0db5a5/volumes" Feb 17 00:44:59 crc kubenswrapper[4805]: I0217 00:44:59.081781 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6db4db54cd-59rhb" event={"ID":"10a79698-8e14-4327-8ce1-89b4d9ee2ff3","Type":"ContainerStarted","Data":"01d03d26da4d6008f53a4fe3f6ec63633b317c996b1f2338a9f8b6b025b37334"} Feb 17 00:44:59 crc kubenswrapper[4805]: I0217 00:44:59.084092 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8b8fdf57c-f4j8b" event={"ID":"579b8385-e85e-43c6-b89d-51143c79b433","Type":"ContainerStarted","Data":"337e0f118c524ad136f3152cd9e600f25b4838e227bb79630cace00b3e18ed1d"} Feb 17 00:44:59 crc kubenswrapper[4805]: I0217 00:44:59.086070 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9c44689dd-p9ww5" event={"ID":"8cc03862-2ea6-4041-badb-7902bc29fb9f","Type":"ContainerStarted","Data":"acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad"} Feb 17 00:44:59 crc kubenswrapper[4805]: I0217 00:44:59.086208 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:44:59 crc kubenswrapper[4805]: I0217 00:44:59.086293 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9c44689dd-p9ww5" event={"ID":"8cc03862-2ea6-4041-badb-7902bc29fb9f","Type":"ContainerStarted","Data":"98b0cea9c38787d7759ae6578ce355af2a822fe3f5bf551574868bf4f81d6fcc"} Feb 17 00:44:59 crc kubenswrapper[4805]: I0217 00:44:59.088474 4805 generic.go:334] "Generic (PLEG): container finished" podID="2557baf3-efbc-4e37-bb54-e3b55b097025" containerID="5a9fe183d0abab2af57291061528cc05da8a451502bdb016fd2410e9e9190375" exitCode=0 Feb 17 00:44:59 crc kubenswrapper[4805]: I0217 00:44:59.088533 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" event={"ID":"2557baf3-efbc-4e37-bb54-e3b55b097025","Type":"ContainerDied","Data":"5a9fe183d0abab2af57291061528cc05da8a451502bdb016fd2410e9e9190375"} Feb 17 00:44:59 crc kubenswrapper[4805]: I0217 00:44:59.088560 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" event={"ID":"2557baf3-efbc-4e37-bb54-e3b55b097025","Type":"ContainerStarted","Data":"f64de87027dd508e6df92d18a4d82289e67b38f0c327e53b566cd59970ae0297"} Feb 17 00:44:59 crc kubenswrapper[4805]: I0217 00:44:59.112867 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-9c44689dd-p9ww5" podStartSLOduration=2.112848934 podStartE2EDuration="2.112848934s" podCreationTimestamp="2026-02-17 00:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:44:59.10098396 +0000 UTC m=+1325.116793358" watchObservedRunningTime="2026-02-17 00:44:59.112848934 +0000 UTC m=+1325.128658332" Feb 17 
00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.102466 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" event={"ID":"2557baf3-efbc-4e37-bb54-e3b55b097025","Type":"ContainerStarted","Data":"6fcc98eb6f4388ff0020ffe754be43dce70a15dfa47a38ab0ea36dcfa8c19fed"} Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.102850 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.133103 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" podStartSLOduration=3.133083073 podStartE2EDuration="3.133083073s" podCreationTimestamp="2026-02-17 00:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:45:00.122183807 +0000 UTC m=+1326.137993215" watchObservedRunningTime="2026-02-17 00:45:00.133083073 +0000 UTC m=+1326.148892471" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.150821 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7"] Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.152493 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.156031 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.159517 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7"] Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.160899 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.218992 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-secret-volume\") pod \"collect-profiles-29521485-8qqm7\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.219057 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7zvv\" (UniqueName: \"kubernetes.io/projected/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-kube-api-access-h7zvv\") pod \"collect-profiles-29521485-8qqm7\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.219187 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-config-volume\") pod \"collect-profiles-29521485-8qqm7\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.320711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-secret-volume\") pod \"collect-profiles-29521485-8qqm7\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.321101 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7zvv\" (UniqueName: \"kubernetes.io/projected/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-kube-api-access-h7zvv\") pod \"collect-profiles-29521485-8qqm7\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.321227 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-config-volume\") pod \"collect-profiles-29521485-8qqm7\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.323043 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-config-volume\") pod \"collect-profiles-29521485-8qqm7\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.336461 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-secret-volume\") pod \"collect-profiles-29521485-8qqm7\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.348018 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7zvv\" (UniqueName: \"kubernetes.io/projected/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-kube-api-access-h7zvv\") pod \"collect-profiles-29521485-8qqm7\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.474666 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.724246 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:45:00 crc kubenswrapper[4805]: I0217 00:45:00.726668 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7b9959496c-vdvnd" Feb 17 00:45:01 crc kubenswrapper[4805]: I0217 00:45:01.752996 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7"] Feb 17 00:45:02 crc kubenswrapper[4805]: I0217 00:45:02.123956 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" event={"ID":"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f","Type":"ContainerStarted","Data":"afa9182a1a2d1b025fcc3ba0d28dfa7a791971efc3f8f09d41fe3288741303bc"} Feb 17 00:45:02 crc kubenswrapper[4805]: I0217 00:45:02.124361 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" event={"ID":"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f","Type":"ContainerStarted","Data":"d471a0cbdf8e8596ae9b80aecdea31e32c67eaadc860e162c56b221e5c2109ac"} Feb 17 00:45:02 crc kubenswrapper[4805]: I0217 00:45:02.126611 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6db4db54cd-59rhb" event={"ID":"10a79698-8e14-4327-8ce1-89b4d9ee2ff3","Type":"ContainerStarted","Data":"38721d288d9d57712680bf249ea0f88ee5ca99c6a22c1046f991a2b67c556e85"} Feb 17 00:45:02 crc kubenswrapper[4805]: I0217 00:45:02.127299 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:45:02 crc kubenswrapper[4805]: I0217 00:45:02.129657 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8b8fdf57c-f4j8b" event={"ID":"579b8385-e85e-43c6-b89d-51143c79b433","Type":"ContainerStarted","Data":"c48da616d329acf8159d2a25ea1f86fdc8e7cb54a9e18ed53960f0859c780fa0"} Feb 17 00:45:02 crc kubenswrapper[4805]: I0217 00:45:02.129784 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:45:02 crc kubenswrapper[4805]: I0217 00:45:02.139276 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" podStartSLOduration=2.13925615 podStartE2EDuration="2.13925615s" podCreationTimestamp="2026-02-17 00:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:45:02.135966148 +0000 UTC m=+1328.151775546" watchObservedRunningTime="2026-02-17 00:45:02.13925615 +0000 UTC m=+1328.155065558" Feb 17 00:45:02 crc kubenswrapper[4805]: I0217 00:45:02.170425 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6db4db54cd-59rhb" podStartSLOduration=2.552909768 podStartE2EDuration="5.170401906s" podCreationTimestamp="2026-02-17 00:44:57 +0000 UTC" firstStartedPulling="2026-02-17 00:44:58.606504934 +0000 UTC m=+1324.622314332" lastFinishedPulling="2026-02-17 00:45:01.223997072 +0000 UTC m=+1327.239806470" observedRunningTime="2026-02-17 00:45:02.160402995 +0000 UTC m=+1328.176212413" watchObservedRunningTime="2026-02-17 00:45:02.170401906 +0000 UTC m=+1328.186211324" Feb 17 
00:45:02 crc kubenswrapper[4805]: I0217 00:45:02.185713 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-8b8fdf57c-f4j8b" podStartSLOduration=2.573119767 podStartE2EDuration="5.185689106s" podCreationTimestamp="2026-02-17 00:44:57 +0000 UTC" firstStartedPulling="2026-02-17 00:44:58.613141381 +0000 UTC m=+1324.628950779" lastFinishedPulling="2026-02-17 00:45:01.22571071 +0000 UTC m=+1327.241520118" observedRunningTime="2026-02-17 00:45:02.175769447 +0000 UTC m=+1328.191578845" watchObservedRunningTime="2026-02-17 00:45:02.185689106 +0000 UTC m=+1328.201498504" Feb 17 00:45:03 crc kubenswrapper[4805]: I0217 00:45:03.142471 4805 generic.go:334] "Generic (PLEG): container finished" podID="bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f" containerID="afa9182a1a2d1b025fcc3ba0d28dfa7a791971efc3f8f09d41fe3288741303bc" exitCode=0 Feb 17 00:45:03 crc kubenswrapper[4805]: I0217 00:45:03.143120 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" event={"ID":"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f","Type":"ContainerDied","Data":"afa9182a1a2d1b025fcc3ba0d28dfa7a791971efc3f8f09d41fe3288741303bc"} Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.147385 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7b4c598ff7-vv75x"] Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.149397 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.168180 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-8b5758cbb-lvlb7"] Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.169555 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.186426 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b4c598ff7-vv75x"] Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.200354 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-dbb694f6f-kn89d"] Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.201635 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.213675 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8b5758cbb-lvlb7"] Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.256844 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-dbb694f6f-kn89d"] Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.353184 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d834c0-2408-456a-9ffd-9333c2c0e26e-combined-ca-bundle\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.353605 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64vcm\" (UniqueName: \"kubernetes.io/projected/13d834c0-2408-456a-9ffd-9333c2c0e26e-kube-api-access-64vcm\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.353647 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data\") pod \"heat-api-dbb694f6f-kn89d\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.353687 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2lsm\" (UniqueName: \"kubernetes.io/projected/ddd72c63-70cf-4c86-8fab-be57a13993f3-kube-api-access-v2lsm\") pod \"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.353716 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13d834c0-2408-456a-9ffd-9333c2c0e26e-config-data-custom\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.353774 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-combined-ca-bundle\") pod \"heat-api-dbb694f6f-kn89d\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.353807 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-combined-ca-bundle\") pod \"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.353866 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data\") pod 
\"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.353919 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djr5g\" (UniqueName: \"kubernetes.io/projected/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-kube-api-access-djr5g\") pod \"heat-api-dbb694f6f-kn89d\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.353954 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data-custom\") pod \"heat-api-dbb694f6f-kn89d\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.353986 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d834c0-2408-456a-9ffd-9333c2c0e26e-config-data\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.354007 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data-custom\") pod \"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.458727 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2lsm\" (UniqueName: \"kubernetes.io/projected/ddd72c63-70cf-4c86-8fab-be57a13993f3-kube-api-access-v2lsm\") pod \"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.458779 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13d834c0-2408-456a-9ffd-9333c2c0e26e-config-data-custom\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.458814 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-combined-ca-bundle\") pod \"heat-api-dbb694f6f-kn89d\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.458844 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-combined-ca-bundle\") pod \"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.458871 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data\") pod \"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.458907 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djr5g\" (UniqueName: \"kubernetes.io/projected/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-kube-api-access-djr5g\") pod \"heat-api-dbb694f6f-kn89d\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.458931 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data-custom\") pod \"heat-api-dbb694f6f-kn89d\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.458945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d834c0-2408-456a-9ffd-9333c2c0e26e-config-data\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.458964 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data-custom\") pod \"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.458987 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d834c0-2408-456a-9ffd-9333c2c0e26e-combined-ca-bundle\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.459021 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64vcm\" (UniqueName: \"kubernetes.io/projected/13d834c0-2408-456a-9ffd-9333c2c0e26e-kube-api-access-64vcm\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.459060 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data\") pod \"heat-api-dbb694f6f-kn89d\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.468105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data-custom\") pod \"heat-api-dbb694f6f-kn89d\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.469020 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/13d834c0-2408-456a-9ffd-9333c2c0e26e-config-data-custom\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.469135 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-combined-ca-bundle\") pod \"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.469860 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data-custom\") pod \"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.470681 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data\") pod \"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.471835 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data\") pod \"heat-api-dbb694f6f-kn89d\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.472452 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13d834c0-2408-456a-9ffd-9333c2c0e26e-combined-ca-bundle\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.473361 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13d834c0-2408-456a-9ffd-9333c2c0e26e-config-data\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.473896 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-combined-ca-bundle\") pod \"heat-api-dbb694f6f-kn89d\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.478730 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64vcm\" (UniqueName: \"kubernetes.io/projected/13d834c0-2408-456a-9ffd-9333c2c0e26e-kube-api-access-64vcm\") pod \"heat-engine-7b4c598ff7-vv75x\" (UID: \"13d834c0-2408-456a-9ffd-9333c2c0e26e\") " pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.483601 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djr5g\" (UniqueName: \"kubernetes.io/projected/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-kube-api-access-djr5g\") pod \"heat-api-dbb694f6f-kn89d\" (UID: 
\"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.485315 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2lsm\" (UniqueName: \"kubernetes.io/projected/ddd72c63-70cf-4c86-8fab-be57a13993f3-kube-api-access-v2lsm\") pod \"heat-cfnapi-8b5758cbb-lvlb7\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.500023 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.526949 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.749068 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.764504 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-config-volume\") pod \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.764651 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7zvv\" (UniqueName: \"kubernetes.io/projected/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-kube-api-access-h7zvv\") pod \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.764857 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-secret-volume\") pod \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\" (UID: \"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f\") " Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.769584 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-config-volume" (OuterVolumeSpecName: "config-volume") pod "bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f" (UID: "bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.775878 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.776023 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.786466 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-kube-api-access-h7zvv" (OuterVolumeSpecName: "kube-api-access-h7zvv") pod "bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f" (UID: "bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f"). InnerVolumeSpecName "kube-api-access-h7zvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.787720 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f" (UID: "bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.877572 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7zvv\" (UniqueName: \"kubernetes.io/projected/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-kube-api-access-h7zvv\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:04 crc kubenswrapper[4805]: I0217 00:45:04.877598 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.092549 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8b5758cbb-lvlb7"] Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.171131 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.171289 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7" event={"ID":"bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f","Type":"ContainerDied","Data":"d471a0cbdf8e8596ae9b80aecdea31e32c67eaadc860e162c56b221e5c2109ac"} Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.171337 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d471a0cbdf8e8596ae9b80aecdea31e32c67eaadc860e162c56b221e5c2109ac" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.182530 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" event={"ID":"ddd72c63-70cf-4c86-8fab-be57a13993f3","Type":"ContainerStarted","Data":"f81b5277280e2a3283adf435faf484e6c95505836ff53fe8d93a473ddeca5773"} Feb 17 00:45:05 crc kubenswrapper[4805]: W0217 00:45:05.195364 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb384fc5_09b9_47e4_9ed0_06d7330e6abf.slice/crio-d272236c624153ac584fcd759ebc5d5a5b899f3285c08423d15968463f51a023 WatchSource:0}: Error finding container d272236c624153ac584fcd759ebc5d5a5b899f3285c08423d15968463f51a023: Status 404 returned error can't find the container with id d272236c624153ac584fcd759ebc5d5a5b899f3285c08423d15968463f51a023 Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.197976 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-dbb694f6f-kn89d"] Feb 17 00:45:05 crc kubenswrapper[4805]: W0217 00:45:05.342087 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13d834c0_2408_456a_9ffd_9333c2c0e26e.slice/crio-4c7ca48dee1cda40f7036d491d4df1a004deac122bf0bbd61885ba76467c5b8d WatchSource:0}: Error finding container 4c7ca48dee1cda40f7036d491d4df1a004deac122bf0bbd61885ba76467c5b8d: Status 404 returned error can't find the container with id 4c7ca48dee1cda40f7036d491d4df1a004deac122bf0bbd61885ba76467c5b8d Feb 17 00:45:05 crc 
kubenswrapper[4805]: I0217 00:45:05.343284 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b4c598ff7-vv75x"] Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.776941 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8b8fdf57c-f4j8b"] Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.777134 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-8b8fdf57c-f4j8b" podUID="579b8385-e85e-43c6-b89d-51143c79b433" containerName="heat-api" containerID="cri-o://c48da616d329acf8159d2a25ea1f86fdc8e7cb54a9e18ed53960f0859c780fa0" gracePeriod=60 Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.825624 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6db4db54cd-59rhb"] Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.825823 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6db4db54cd-59rhb" podUID="10a79698-8e14-4327-8ce1-89b4d9ee2ff3" containerName="heat-cfnapi" containerID="cri-o://38721d288d9d57712680bf249ea0f88ee5ca99c6a22c1046f991a2b67c556e85" gracePeriod=60 Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.841532 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7756f86689-rb9tx"] Feb 17 00:45:05 crc kubenswrapper[4805]: E0217 00:45:05.842008 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f" containerName="collect-profiles" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.842026 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f" containerName="collect-profiles" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.842232 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f" containerName="collect-profiles" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.843057 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.845677 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.845859 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.855389 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7756f86689-rb9tx"] Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.888194 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-875d6bfdc-p74bh"] Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.889423 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.892300 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.892542 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.898733 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-combined-ca-bundle\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.898826 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqg8x\" (UniqueName: \"kubernetes.io/projected/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-kube-api-access-sqg8x\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.898862 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-config-data-custom\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.898922 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-internal-tls-certs\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.898984 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-config-data\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.899004 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-public-tls-certs\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:05 crc kubenswrapper[4805]: I0217 00:45:05.913510 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-875d6bfdc-p74bh"] Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.000303 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-config-data-custom\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.000612 4805 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-config-data\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.000741 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-public-tls-certs\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.000862 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxcbt\" (UniqueName: \"kubernetes.io/projected/f164372a-5796-4984-8913-43ed2d3b5e6f-kube-api-access-bxcbt\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.000990 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-internal-tls-certs\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.001065 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-combined-ca-bundle\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.001207 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-public-tls-certs\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.001458 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-combined-ca-bundle\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.001642 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-config-data\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.001734 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqg8x\" (UniqueName: \"kubernetes.io/projected/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-kube-api-access-sqg8x\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.002035 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-config-data-custom\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.002158 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-internal-tls-certs\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.005413 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-config-data-custom\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.005686 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-config-data\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.006521 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-internal-tls-certs\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.015185 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-combined-ca-bundle\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.022604 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqg8x\" (UniqueName: \"kubernetes.io/projected/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-kube-api-access-sqg8x\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.022791 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df3b59bd-7b58-4ea5-8cdb-f25fcbf13793-public-tls-certs\") pod \"heat-api-7756f86689-rb9tx\" (UID: \"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793\") " pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.107434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-public-tls-certs\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.107886 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-config-data\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.107991 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-config-data-custom\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.108088 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxcbt\" (UniqueName: \"kubernetes.io/projected/f164372a-5796-4984-8913-43ed2d3b5e6f-kube-api-access-bxcbt\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.108558 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-internal-tls-certs\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.108596 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-combined-ca-bundle\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.110807 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-public-tls-certs\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.111730 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-config-data\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.115114 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-combined-ca-bundle\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.116069 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-config-data-custom\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.123738 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f164372a-5796-4984-8913-43ed2d3b5e6f-internal-tls-certs\") 
pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.126523 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxcbt\" (UniqueName: \"kubernetes.io/projected/f164372a-5796-4984-8913-43ed2d3b5e6f-kube-api-access-bxcbt\") pod \"heat-cfnapi-875d6bfdc-p74bh\" (UID: \"f164372a-5796-4984-8913-43ed2d3b5e6f\") " pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.162757 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.215451 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.222903 4805 generic.go:334] "Generic (PLEG): container finished" podID="ddd72c63-70cf-4c86-8fab-be57a13993f3" containerID="cca92da6008472edddd78059d6ac19d533c6ce9347a14d4b0344455ac1218757" exitCode=1 Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.223007 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" event={"ID":"ddd72c63-70cf-4c86-8fab-be57a13993f3","Type":"ContainerDied","Data":"cca92da6008472edddd78059d6ac19d533c6ce9347a14d4b0344455ac1218757"} Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.223694 4805 scope.go:117] "RemoveContainer" containerID="cca92da6008472edddd78059d6ac19d533c6ce9347a14d4b0344455ac1218757" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.282682 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b4c598ff7-vv75x" event={"ID":"13d834c0-2408-456a-9ffd-9333c2c0e26e","Type":"ContainerStarted","Data":"bf832569eb60017cc1e10a63790b57622489314a9de00891603a4aeda7c9dfe6"} Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.282736 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b4c598ff7-vv75x" event={"ID":"13d834c0-2408-456a-9ffd-9333c2c0e26e","Type":"ContainerStarted","Data":"4c7ca48dee1cda40f7036d491d4df1a004deac122bf0bbd61885ba76467c5b8d"} Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.283034 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.307696 4805 generic.go:334] "Generic (PLEG): container finished" podID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" containerID="ff54ab867e2731a4906c54b42ba11951fc64f64c590e25988ffed12b91dd0a53" exitCode=1 Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.307768 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-dbb694f6f-kn89d" event={"ID":"fb384fc5-09b9-47e4-9ed0-06d7330e6abf","Type":"ContainerDied","Data":"ff54ab867e2731a4906c54b42ba11951fc64f64c590e25988ffed12b91dd0a53"} Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.307794 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-dbb694f6f-kn89d" event={"ID":"fb384fc5-09b9-47e4-9ed0-06d7330e6abf","Type":"ContainerStarted","Data":"d272236c624153ac584fcd759ebc5d5a5b899f3285c08423d15968463f51a023"} Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.308522 4805 scope.go:117] "RemoveContainer" containerID="ff54ab867e2731a4906c54b42ba11951fc64f64c590e25988ffed12b91dd0a53" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 
00:45:06.321872 4805 generic.go:334] "Generic (PLEG): container finished" podID="10a79698-8e14-4327-8ce1-89b4d9ee2ff3" containerID="38721d288d9d57712680bf249ea0f88ee5ca99c6a22c1046f991a2b67c556e85" exitCode=0 Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.321937 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6db4db54cd-59rhb" event={"ID":"10a79698-8e14-4327-8ce1-89b4d9ee2ff3","Type":"ContainerDied","Data":"38721d288d9d57712680bf249ea0f88ee5ca99c6a22c1046f991a2b67c556e85"} Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.333657 4805 generic.go:334] "Generic (PLEG): container finished" podID="579b8385-e85e-43c6-b89d-51143c79b433" containerID="c48da616d329acf8159d2a25ea1f86fdc8e7cb54a9e18ed53960f0859c780fa0" exitCode=0 Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.333700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8b8fdf57c-f4j8b" event={"ID":"579b8385-e85e-43c6-b89d-51143c79b433","Type":"ContainerDied","Data":"c48da616d329acf8159d2a25ea1f86fdc8e7cb54a9e18ed53960f0859c780fa0"} Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.341069 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7b4c598ff7-vv75x" podStartSLOduration=2.341049121 podStartE2EDuration="2.341049121s" podCreationTimestamp="2026-02-17 00:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:45:06.310999616 +0000 UTC m=+1332.326809024" watchObservedRunningTime="2026-02-17 00:45:06.341049121 +0000 UTC m=+1332.356858519" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.771865 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.855708 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data\") pod \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.856378 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-combined-ca-bundle\") pod \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.856483 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data-custom\") pod \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.856589 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxsfz\" (UniqueName: \"kubernetes.io/projected/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-kube-api-access-hxsfz\") pod \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\" (UID: \"10a79698-8e14-4327-8ce1-89b4d9ee2ff3\") " Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.864757 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-kube-api-access-hxsfz" (OuterVolumeSpecName: "kube-api-access-hxsfz") pod 
"10a79698-8e14-4327-8ce1-89b4d9ee2ff3" (UID: "10a79698-8e14-4327-8ce1-89b4d9ee2ff3"). InnerVolumeSpecName "kube-api-access-hxsfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.865476 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "10a79698-8e14-4327-8ce1-89b4d9ee2ff3" (UID: "10a79698-8e14-4327-8ce1-89b4d9ee2ff3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.938547 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10a79698-8e14-4327-8ce1-89b4d9ee2ff3" (UID: "10a79698-8e14-4327-8ce1-89b4d9ee2ff3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.941632 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.948766 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data" (OuterVolumeSpecName: "config-data") pod "10a79698-8e14-4327-8ce1-89b4d9ee2ff3" (UID: "10a79698-8e14-4327-8ce1-89b4d9ee2ff3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.959763 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.959791 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxsfz\" (UniqueName: \"kubernetes.io/projected/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-kube-api-access-hxsfz\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.959802 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:06 crc kubenswrapper[4805]: I0217 00:45:06.959810 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10a79698-8e14-4327-8ce1-89b4d9ee2ff3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.060998 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data-custom\") pod \"579b8385-e85e-43c6-b89d-51143c79b433\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.061148 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data\") pod \"579b8385-e85e-43c6-b89d-51143c79b433\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.061225 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9459\" (UniqueName: \"kubernetes.io/projected/579b8385-e85e-43c6-b89d-51143c79b433-kube-api-access-w9459\") pod \"579b8385-e85e-43c6-b89d-51143c79b433\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.061269 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-combined-ca-bundle\") pod \"579b8385-e85e-43c6-b89d-51143c79b433\" (UID: \"579b8385-e85e-43c6-b89d-51143c79b433\") " Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.324046 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "579b8385-e85e-43c6-b89d-51143c79b433" (UID: "579b8385-e85e-43c6-b89d-51143c79b433"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.327171 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579b8385-e85e-43c6-b89d-51143c79b433-kube-api-access-w9459" (OuterVolumeSpecName: "kube-api-access-w9459") pod "579b8385-e85e-43c6-b89d-51143c79b433" (UID: "579b8385-e85e-43c6-b89d-51143c79b433"). InnerVolumeSpecName "kube-api-access-w9459". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.327497 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.327514 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9459\" (UniqueName: \"kubernetes.io/projected/579b8385-e85e-43c6-b89d-51143c79b433-kube-api-access-w9459\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.329663 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "579b8385-e85e-43c6-b89d-51143c79b433" (UID: "579b8385-e85e-43c6-b89d-51143c79b433"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.383466 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7756f86689-rb9tx"] Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.395594 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-dbb694f6f-kn89d" event={"ID":"fb384fc5-09b9-47e4-9ed0-06d7330e6abf","Type":"ContainerStarted","Data":"79060f9a08bb54f3dc88c430e4de297c64e0199f2f4dd182ce04467c6dc2e3c2"} Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.395850 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.397769 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data" (OuterVolumeSpecName: "config-data") pod "579b8385-e85e-43c6-b89d-51143c79b433" (UID: "579b8385-e85e-43c6-b89d-51143c79b433"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.398402 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-875d6bfdc-p74bh"] Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.398902 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6db4db54cd-59rhb" event={"ID":"10a79698-8e14-4327-8ce1-89b4d9ee2ff3","Type":"ContainerDied","Data":"01d03d26da4d6008f53a4fe3f6ec63633b317c996b1f2338a9f8b6b025b37334"} Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.398943 4805 scope.go:117] "RemoveContainer" containerID="38721d288d9d57712680bf249ea0f88ee5ca99c6a22c1046f991a2b67c556e85" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.399057 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6db4db54cd-59rhb" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.408891 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8b8fdf57c-f4j8b" event={"ID":"579b8385-e85e-43c6-b89d-51143c79b433","Type":"ContainerDied","Data":"337e0f118c524ad136f3152cd9e600f25b4838e227bb79630cace00b3e18ed1d"} Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.409102 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-8b8fdf57c-f4j8b" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.412220 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-dbb694f6f-kn89d" podStartSLOduration=3.412205093 podStartE2EDuration="3.412205093s" podCreationTimestamp="2026-02-17 00:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:45:07.410892406 +0000 UTC m=+1333.426701804" watchObservedRunningTime="2026-02-17 00:45:07.412205093 +0000 UTC m=+1333.428014491" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.414559 4805 generic.go:334] "Generic (PLEG): container finished" podID="ddd72c63-70cf-4c86-8fab-be57a13993f3" containerID="d0e8be38afa9691741ed8c1d75920310aa840b5a9a2fde82aff84e4a1a1a8c0b" exitCode=1 Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.414610 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" event={"ID":"ddd72c63-70cf-4c86-8fab-be57a13993f3","Type":"ContainerDied","Data":"d0e8be38afa9691741ed8c1d75920310aa840b5a9a2fde82aff84e4a1a1a8c0b"} Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.427319 4805 generic.go:334] "Generic (PLEG): container finished" podID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerID="433763dbce326caf7b981a4a30e6c7a73ea7e72ce1cf500d0e478dbc9a04288d" exitCode=0 Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.428233 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aca122cb-0d44-4426-a51f-55ded72d70e7","Type":"ContainerDied","Data":"433763dbce326caf7b981a4a30e6c7a73ea7e72ce1cf500d0e478dbc9a04288d"} Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.430633 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.430666 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579b8385-e85e-43c6-b89d-51143c79b433-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.465050 4805 scope.go:117] "RemoveContainer" containerID="d0e8be38afa9691741ed8c1d75920310aa840b5a9a2fde82aff84e4a1a1a8c0b" Feb 17 00:45:07 crc kubenswrapper[4805]: E0217 00:45:07.465390 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8b5758cbb-lvlb7_openstack(ddd72c63-70cf-4c86-8fab-be57a13993f3)\"" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" podUID="ddd72c63-70cf-4c86-8fab-be57a13993f3" Feb 17 00:45:07 crc kubenswrapper[4805]: W0217 00:45:07.589384 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf3b59bd_7b58_4ea5_8cdb_f25fcbf13793.slice/crio-8e6674f1ac4d1ec18617d6afcb129bfe2988e3c0073717239964e97ae7e74053 WatchSource:0}: Error finding container 8e6674f1ac4d1ec18617d6afcb129bfe2988e3c0073717239964e97ae7e74053: Status 404 returned error can't find the container with id 8e6674f1ac4d1ec18617d6afcb129bfe2988e3c0073717239964e97ae7e74053 Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.764115 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.840707 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5jpc8"] Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.841222 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" podUID="ab376c9f-5da0-4d6f-aca4-16c20967016d" containerName="dnsmasq-dns" containerID="cri-o://494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d" gracePeriod=10 Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.938382 4805 scope.go:117] "RemoveContainer" containerID="c48da616d329acf8159d2a25ea1f86fdc8e7cb54a9e18ed53960f0859c780fa0" Feb 17 00:45:07 crc kubenswrapper[4805]: I0217 00:45:07.999804 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.000539 4805 scope.go:117] "RemoveContainer" containerID="cca92da6008472edddd78059d6ac19d533c6ce9347a14d4b0344455ac1218757" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.009932 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8b8fdf57c-f4j8b"] Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.026648 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-8b8fdf57c-f4j8b"] Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.044738 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6db4db54cd-59rhb"] Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.065453 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6db4db54cd-59rhb"] Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.072273 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10a79698_8e14_4327_8ce1_89b4d9ee2ff3.slice/crio-01d03d26da4d6008f53a4fe3f6ec63633b317c996b1f2338a9f8b6b025b37334\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddd72c63_70cf_4c86_8fab_be57a13993f3.slice/crio-conmon-d0e8be38afa9691741ed8c1d75920310aa840b5a9a2fde82aff84e4a1a1a8c0b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod579b8385_e85e_43c6_b89d_51143c79b433.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab376c9f_5da0_4d6f_aca4_16c20967016d.slice/crio-conmon-494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab376c9f_5da0_4d6f_aca4_16c20967016d.slice/crio-494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d.scope\": RecentStats: unable to find data in memory cache]" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.143080 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-scripts\") pod \"aca122cb-0d44-4426-a51f-55ded72d70e7\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.143523 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-log-httpd\") pod \"aca122cb-0d44-4426-a51f-55ded72d70e7\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.143563 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-combined-ca-bundle\") pod \"aca122cb-0d44-4426-a51f-55ded72d70e7\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.143632 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-sg-core-conf-yaml\") pod \"aca122cb-0d44-4426-a51f-55ded72d70e7\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.143668 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-run-httpd\") pod \"aca122cb-0d44-4426-a51f-55ded72d70e7\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.143698 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfk2p\" (UniqueName: \"kubernetes.io/projected/aca122cb-0d44-4426-a51f-55ded72d70e7-kube-api-access-sfk2p\") pod \"aca122cb-0d44-4426-a51f-55ded72d70e7\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.143731 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-config-data\") pod \"aca122cb-0d44-4426-a51f-55ded72d70e7\" (UID: \"aca122cb-0d44-4426-a51f-55ded72d70e7\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.145972 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aca122cb-0d44-4426-a51f-55ded72d70e7" (UID: "aca122cb-0d44-4426-a51f-55ded72d70e7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.146394 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aca122cb-0d44-4426-a51f-55ded72d70e7" (UID: "aca122cb-0d44-4426-a51f-55ded72d70e7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.151078 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-scripts" (OuterVolumeSpecName: "scripts") pod "aca122cb-0d44-4426-a51f-55ded72d70e7" (UID: "aca122cb-0d44-4426-a51f-55ded72d70e7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.151602 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca122cb-0d44-4426-a51f-55ded72d70e7-kube-api-access-sfk2p" (OuterVolumeSpecName: "kube-api-access-sfk2p") pod "aca122cb-0d44-4426-a51f-55ded72d70e7" (UID: "aca122cb-0d44-4426-a51f-55ded72d70e7"). InnerVolumeSpecName "kube-api-access-sfk2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.176895 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aca122cb-0d44-4426-a51f-55ded72d70e7" (UID: "aca122cb-0d44-4426-a51f-55ded72d70e7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.246600 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.246668 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.246684 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.246696 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aca122cb-0d44-4426-a51f-55ded72d70e7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.246707 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfk2p\" (UniqueName: \"kubernetes.io/projected/aca122cb-0d44-4426-a51f-55ded72d70e7-kube-api-access-sfk2p\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.294401 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-config-data" (OuterVolumeSpecName: "config-data") pod "aca122cb-0d44-4426-a51f-55ded72d70e7" (UID: "aca122cb-0d44-4426-a51f-55ded72d70e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.304464 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aca122cb-0d44-4426-a51f-55ded72d70e7" (UID: "aca122cb-0d44-4426-a51f-55ded72d70e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.348481 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.348527 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca122cb-0d44-4426-a51f-55ded72d70e7-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.378799 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.565588 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7756f86689-rb9tx" event={"ID":"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793","Type":"ContainerStarted","Data":"9dd6c078e250b32cb1d1754ba9a9e11e7143316e646a666f386c8ffb581bfbf8"} Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.565634 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7756f86689-rb9tx" event={"ID":"df3b59bd-7b58-4ea5-8cdb-f25fcbf13793","Type":"ContainerStarted","Data":"8e6674f1ac4d1ec18617d6afcb129bfe2988e3c0073717239964e97ae7e74053"} Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.566909 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.572822 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-875d6bfdc-p74bh" event={"ID":"f164372a-5796-4984-8913-43ed2d3b5e6f","Type":"ContainerStarted","Data":"3cf78f28831c3b84b34031fb4c66b03433663a03fbce34ea4442e0f9b711e823"} Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.572877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-875d6bfdc-p74bh" event={"ID":"f164372a-5796-4984-8913-43ed2d3b5e6f","Type":"ContainerStarted","Data":"63f6c3e5a781db412268c03e9170f634445675ba41431d7d99ff8a0c563f3840"} Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.576552 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.578006 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aca122cb-0d44-4426-a51f-55ded72d70e7","Type":"ContainerDied","Data":"922d8eba0ce807b94cfb3b9524119292053375ae514d6e41e840f5ff2b1e6b50"} Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.578054 4805 scope.go:117] "RemoveContainer" containerID="d692ae24240deba5be9edc1680c5939230c3916252f34c348e16f74612133c91" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.579055 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkl6t\" (UniqueName: \"kubernetes.io/projected/ab376c9f-5da0-4d6f-aca4-16c20967016d-kube-api-access-vkl6t\") pod \"ab376c9f-5da0-4d6f-aca4-16c20967016d\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.579159 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-sb\") pod \"ab376c9f-5da0-4d6f-aca4-16c20967016d\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.579201 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-nb\") pod \"ab376c9f-5da0-4d6f-aca4-16c20967016d\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.579349 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-svc\") pod \"ab376c9f-5da0-4d6f-aca4-16c20967016d\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.579384 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-swift-storage-0\") pod \"ab376c9f-5da0-4d6f-aca4-16c20967016d\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.579415 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-config\") pod \"ab376c9f-5da0-4d6f-aca4-16c20967016d\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.583889 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7756f86689-rb9tx" podStartSLOduration=3.583868202 podStartE2EDuration="3.583868202s" podCreationTimestamp="2026-02-17 00:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:45:08.580403885 +0000 UTC m=+1334.596213283" watchObservedRunningTime="2026-02-17 00:45:08.583868202 +0000 UTC m=+1334.599677600" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.586693 4805 generic.go:334] "Generic (PLEG): container finished" podID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" containerID="79060f9a08bb54f3dc88c430e4de297c64e0199f2f4dd182ce04467c6dc2e3c2" exitCode=1 Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.586785 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-api-dbb694f6f-kn89d" event={"ID":"fb384fc5-09b9-47e4-9ed0-06d7330e6abf","Type":"ContainerDied","Data":"79060f9a08bb54f3dc88c430e4de297c64e0199f2f4dd182ce04467c6dc2e3c2"} Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.587529 4805 scope.go:117] "RemoveContainer" containerID="79060f9a08bb54f3dc88c430e4de297c64e0199f2f4dd182ce04467c6dc2e3c2" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.587958 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-dbb694f6f-kn89d_openstack(fb384fc5-09b9-47e4-9ed0-06d7330e6abf)\"" pod="openstack/heat-api-dbb694f6f-kn89d" podUID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.594544 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab376c9f-5da0-4d6f-aca4-16c20967016d-kube-api-access-vkl6t" (OuterVolumeSpecName: "kube-api-access-vkl6t") pod "ab376c9f-5da0-4d6f-aca4-16c20967016d" (UID: "ab376c9f-5da0-4d6f-aca4-16c20967016d"). InnerVolumeSpecName "kube-api-access-vkl6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.627508 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab376c9f-5da0-4d6f-aca4-16c20967016d" containerID="494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d" exitCode=0 Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.627573 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" event={"ID":"ab376c9f-5da0-4d6f-aca4-16c20967016d","Type":"ContainerDied","Data":"494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d"} Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.627598 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" event={"ID":"ab376c9f-5da0-4d6f-aca4-16c20967016d","Type":"ContainerDied","Data":"a50ffc0bfe61f3513b27f10593b704839452679e0f6c240b7aa19807c277a760"} Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.627685 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5jpc8" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.636670 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-875d6bfdc-p74bh" podStartSLOduration=3.636655606 podStartE2EDuration="3.636655606s" podCreationTimestamp="2026-02-17 00:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:45:08.60796499 +0000 UTC m=+1334.623774388" watchObservedRunningTime="2026-02-17 00:45:08.636655606 +0000 UTC m=+1334.652465004" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.647130 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-config" (OuterVolumeSpecName: "config") pod "ab376c9f-5da0-4d6f-aca4-16c20967016d" (UID: "ab376c9f-5da0-4d6f-aca4-16c20967016d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.647492 4805 scope.go:117] "RemoveContainer" containerID="d0e8be38afa9691741ed8c1d75920310aa840b5a9a2fde82aff84e4a1a1a8c0b" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.647744 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8b5758cbb-lvlb7_openstack(ddd72c63-70cf-4c86-8fab-be57a13993f3)\"" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" podUID="ddd72c63-70cf-4c86-8fab-be57a13993f3" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.647931 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab376c9f-5da0-4d6f-aca4-16c20967016d" (UID: "ab376c9f-5da0-4d6f-aca4-16c20967016d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.662718 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-sb podName:ab376c9f-5da0-4d6f-aca4-16c20967016d nodeName:}" failed. No retries permitted until 2026-02-17 00:45:09.162695879 +0000 UTC m=+1335.178505277 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-sb") pod "ab376c9f-5da0-4d6f-aca4-16c20967016d" (UID: "ab376c9f-5da0-4d6f-aca4-16c20967016d") : error deleting /var/lib/kubelet/pods/ab376c9f-5da0-4d6f-aca4-16c20967016d/volume-subpaths: remove /var/lib/kubelet/pods/ab376c9f-5da0-4d6f-aca4-16c20967016d/volume-subpaths: no such file or directory Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.663006 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab376c9f-5da0-4d6f-aca4-16c20967016d" (UID: "ab376c9f-5da0-4d6f-aca4-16c20967016d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.663710 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ab376c9f-5da0-4d6f-aca4-16c20967016d" (UID: "ab376c9f-5da0-4d6f-aca4-16c20967016d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.681937 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.681967 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkl6t\" (UniqueName: \"kubernetes.io/projected/ab376c9f-5da0-4d6f-aca4-16c20967016d-kube-api-access-vkl6t\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.681977 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.681985 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.681995 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.760284 4805 scope.go:117] "RemoveContainer" containerID="9de12d9103f5f3e819e27a6675d3753026ea7f34e08a9e5dcf4e9550f3dfd34b" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.771482 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.780451 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.785756 4805 scope.go:117] "RemoveContainer" containerID="9b365220ae01b53ea7ec7674248cc59cc837c5ee3e23520a7d7b02086b4a838a" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.802655 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a79698-8e14-4327-8ce1-89b4d9ee2ff3" path="/var/lib/kubelet/pods/10a79698-8e14-4327-8ce1-89b4d9ee2ff3/volumes" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.803284 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579b8385-e85e-43c6-b89d-51143c79b433" path="/var/lib/kubelet/pods/579b8385-e85e-43c6-b89d-51143c79b433/volumes" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.803903 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" path="/var/lib/kubelet/pods/aca122cb-0d44-4426-a51f-55ded72d70e7/volumes" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.806265 4805 scope.go:117] "RemoveContainer" containerID="433763dbce326caf7b981a4a30e6c7a73ea7e72ce1cf500d0e478dbc9a04288d" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.807566 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.808344 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a79698-8e14-4327-8ce1-89b4d9ee2ff3" containerName="heat-cfnapi" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808362 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a79698-8e14-4327-8ce1-89b4d9ee2ff3" containerName="heat-cfnapi" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 
00:45:08.808387 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="sg-core" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808393 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="sg-core" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.808403 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="proxy-httpd" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808409 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="proxy-httpd" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.808423 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab376c9f-5da0-4d6f-aca4-16c20967016d" containerName="init" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808429 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab376c9f-5da0-4d6f-aca4-16c20967016d" containerName="init" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.808443 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab376c9f-5da0-4d6f-aca4-16c20967016d" containerName="dnsmasq-dns" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808450 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab376c9f-5da0-4d6f-aca4-16c20967016d" containerName="dnsmasq-dns" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.808462 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579b8385-e85e-43c6-b89d-51143c79b433" containerName="heat-api" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808468 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="579b8385-e85e-43c6-b89d-51143c79b433" containerName="heat-api" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.808479 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="ceilometer-central-agent" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808484 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="ceilometer-central-agent" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.808507 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="ceilometer-notification-agent" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808514 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="ceilometer-notification-agent" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808708 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="proxy-httpd" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808720 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab376c9f-5da0-4d6f-aca4-16c20967016d" containerName="dnsmasq-dns" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808728 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a79698-8e14-4327-8ce1-89b4d9ee2ff3" containerName="heat-cfnapi" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808737 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="ceilometer-notification-agent" Feb 17 00:45:08 crc 
kubenswrapper[4805]: I0217 00:45:08.808748 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="ceilometer-central-agent" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808762 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca122cb-0d44-4426-a51f-55ded72d70e7" containerName="sg-core" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.808774 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="579b8385-e85e-43c6-b89d-51143c79b433" containerName="heat-api" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.812775 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.812903 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.823769 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.824795 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.861380 4805 scope.go:117] "RemoveContainer" containerID="ff54ab867e2731a4906c54b42ba11951fc64f64c590e25988ffed12b91dd0a53" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.885005 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.885098 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.885143 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-scripts\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.885230 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-config-data\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.885928 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-run-httpd\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.886060 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnjb9\" (UniqueName: \"kubernetes.io/projected/5d95b51c-8931-443e-a499-c7164a006372-kube-api-access-vnjb9\") pod 
\"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.886128 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-log-httpd\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.922256 4805 scope.go:117] "RemoveContainer" containerID="494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.955274 4805 scope.go:117] "RemoveContainer" containerID="a6e39bd1f3788c3e4a4e87c50da9b9e609ba347ace019b6bf0d5cfc5c0632ecc" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.977374 4805 scope.go:117] "RemoveContainer" containerID="494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.977850 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d\": container with ID starting with 494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d not found: ID does not exist" containerID="494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.977879 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d"} err="failed to get container status \"494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d\": rpc error: code = NotFound desc = could not find container \"494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d\": container with ID starting with 494fc7ecbdf05eda8ce82b5c39f1b369fca44a648f21c5f70c52edb72e9bc75d not found: ID does not exist" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.977898 4805 scope.go:117] "RemoveContainer" containerID="a6e39bd1f3788c3e4a4e87c50da9b9e609ba347ace019b6bf0d5cfc5c0632ecc" Feb 17 00:45:08 crc kubenswrapper[4805]: E0217 00:45:08.978265 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6e39bd1f3788c3e4a4e87c50da9b9e609ba347ace019b6bf0d5cfc5c0632ecc\": container with ID starting with a6e39bd1f3788c3e4a4e87c50da9b9e609ba347ace019b6bf0d5cfc5c0632ecc not found: ID does not exist" containerID="a6e39bd1f3788c3e4a4e87c50da9b9e609ba347ace019b6bf0d5cfc5c0632ecc" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.978290 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e39bd1f3788c3e4a4e87c50da9b9e609ba347ace019b6bf0d5cfc5c0632ecc"} err="failed to get container status \"a6e39bd1f3788c3e4a4e87c50da9b9e609ba347ace019b6bf0d5cfc5c0632ecc\": rpc error: code = NotFound desc = could not find container \"a6e39bd1f3788c3e4a4e87c50da9b9e609ba347ace019b6bf0d5cfc5c0632ecc\": container with ID starting with a6e39bd1f3788c3e4a4e87c50da9b9e609ba347ace019b6bf0d5cfc5c0632ecc not found: ID does not exist" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.988772 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-config-data\") pod 
\"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.988811 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-run-httpd\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.988906 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnjb9\" (UniqueName: \"kubernetes.io/projected/5d95b51c-8931-443e-a499-c7164a006372-kube-api-access-vnjb9\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.988945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-log-httpd\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.988980 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.989010 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.989048 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-scripts\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.989283 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-run-httpd\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.989527 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-log-httpd\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.993788 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.994980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-scripts\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") 
" pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.996102 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:08 crc kubenswrapper[4805]: I0217 00:45:08.999706 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-config-data\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.012859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnjb9\" (UniqueName: \"kubernetes.io/projected/5d95b51c-8931-443e-a499-c7164a006372-kube-api-access-vnjb9\") pod \"ceilometer-0\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " pod="openstack/ceilometer-0" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.138483 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.193268 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-sb\") pod \"ab376c9f-5da0-4d6f-aca4-16c20967016d\" (UID: \"ab376c9f-5da0-4d6f-aca4-16c20967016d\") " Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.193827 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab376c9f-5da0-4d6f-aca4-16c20967016d" (UID: "ab376c9f-5da0-4d6f-aca4-16c20967016d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.194443 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab376c9f-5da0-4d6f-aca4-16c20967016d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.282620 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5jpc8"] Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.299619 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5jpc8"] Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.502606 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.502732 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.528406 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.670698 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.677921 4805 scope.go:117] "RemoveContainer" containerID="79060f9a08bb54f3dc88c430e4de297c64e0199f2f4dd182ce04467c6dc2e3c2" Feb 17 00:45:09 crc kubenswrapper[4805]: E0217 00:45:09.678237 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-dbb694f6f-kn89d_openstack(fb384fc5-09b9-47e4-9ed0-06d7330e6abf)\"" pod="openstack/heat-api-dbb694f6f-kn89d" podUID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.683274 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.683517 4805 scope.go:117] "RemoveContainer" containerID="d0e8be38afa9691741ed8c1d75920310aa840b5a9a2fde82aff84e4a1a1a8c0b" Feb 17 00:45:09 crc kubenswrapper[4805]: E0217 00:45:09.683745 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8b5758cbb-lvlb7_openstack(ddd72c63-70cf-4c86-8fab-be57a13993f3)\"" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" podUID="ddd72c63-70cf-4c86-8fab-be57a13993f3" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.905573 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-sflzs"] Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.907229 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sflzs" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.917269 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d364-account-create-update-r4pjv"] Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.918818 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d364-account-create-update-r4pjv" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.920820 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.931602 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sflzs"] Feb 17 00:45:09 crc kubenswrapper[4805]: I0217 00:45:09.956476 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d364-account-create-update-r4pjv"] Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.017404 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6wm\" (UniqueName: \"kubernetes.io/projected/7d764513-224d-4ccb-acc5-49f319acaa63-kube-api-access-mv6wm\") pod \"nova-api-db-create-sflzs\" (UID: \"7d764513-224d-4ccb-acc5-49f319acaa63\") " pod="openstack/nova-api-db-create-sflzs" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.017568 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/068c01a0-347f-401a-bac0-b0e82bb04e7d-operator-scripts\") pod \"nova-api-d364-account-create-update-r4pjv\" (UID: \"068c01a0-347f-401a-bac0-b0e82bb04e7d\") " pod="openstack/nova-api-d364-account-create-update-r4pjv" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.017629 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxwq5\" (UniqueName: \"kubernetes.io/projected/068c01a0-347f-401a-bac0-b0e82bb04e7d-kube-api-access-sxwq5\") pod \"nova-api-d364-account-create-update-r4pjv\" (UID: \"068c01a0-347f-401a-bac0-b0e82bb04e7d\") " pod="openstack/nova-api-d364-account-create-update-r4pjv" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.017677 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d764513-224d-4ccb-acc5-49f319acaa63-operator-scripts\") pod \"nova-api-db-create-sflzs\" (UID: \"7d764513-224d-4ccb-acc5-49f319acaa63\") " pod="openstack/nova-api-db-create-sflzs" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.024254 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-fdg8l"] Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.025784 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fdg8l" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.039367 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fdg8l"] Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.122754 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4mxk\" (UniqueName: \"kubernetes.io/projected/a4244588-a78b-401f-be2f-9d1c4f70fc40-kube-api-access-x4mxk\") pod \"nova-cell0-db-create-fdg8l\" (UID: \"a4244588-a78b-401f-be2f-9d1c4f70fc40\") " pod="openstack/nova-cell0-db-create-fdg8l" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.122866 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv6wm\" (UniqueName: \"kubernetes.io/projected/7d764513-224d-4ccb-acc5-49f319acaa63-kube-api-access-mv6wm\") pod \"nova-api-db-create-sflzs\" (UID: \"7d764513-224d-4ccb-acc5-49f319acaa63\") " pod="openstack/nova-api-db-create-sflzs" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.123029 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4244588-a78b-401f-be2f-9d1c4f70fc40-operator-scripts\") pod \"nova-cell0-db-create-fdg8l\" (UID: \"a4244588-a78b-401f-be2f-9d1c4f70fc40\") " pod="openstack/nova-cell0-db-create-fdg8l" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.123062 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/068c01a0-347f-401a-bac0-b0e82bb04e7d-operator-scripts\") pod \"nova-api-d364-account-create-update-r4pjv\" (UID: \"068c01a0-347f-401a-bac0-b0e82bb04e7d\") " pod="openstack/nova-api-d364-account-create-update-r4pjv" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.123134 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxwq5\" (UniqueName: \"kubernetes.io/projected/068c01a0-347f-401a-bac0-b0e82bb04e7d-kube-api-access-sxwq5\") pod \"nova-api-d364-account-create-update-r4pjv\" (UID: \"068c01a0-347f-401a-bac0-b0e82bb04e7d\") " pod="openstack/nova-api-d364-account-create-update-r4pjv" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.123181 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d764513-224d-4ccb-acc5-49f319acaa63-operator-scripts\") pod \"nova-api-db-create-sflzs\" (UID: \"7d764513-224d-4ccb-acc5-49f319acaa63\") " pod="openstack/nova-api-db-create-sflzs" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.124892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d764513-224d-4ccb-acc5-49f319acaa63-operator-scripts\") pod \"nova-api-db-create-sflzs\" (UID: \"7d764513-224d-4ccb-acc5-49f319acaa63\") " pod="openstack/nova-api-db-create-sflzs" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.125692 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/068c01a0-347f-401a-bac0-b0e82bb04e7d-operator-scripts\") pod \"nova-api-d364-account-create-update-r4pjv\" (UID: \"068c01a0-347f-401a-bac0-b0e82bb04e7d\") " pod="openstack/nova-api-d364-account-create-update-r4pjv" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.145868 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxwq5\" (UniqueName: \"kubernetes.io/projected/068c01a0-347f-401a-bac0-b0e82bb04e7d-kube-api-access-sxwq5\") pod \"nova-api-d364-account-create-update-r4pjv\" (UID: \"068c01a0-347f-401a-bac0-b0e82bb04e7d\") " pod="openstack/nova-api-d364-account-create-update-r4pjv" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.149650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv6wm\" (UniqueName: \"kubernetes.io/projected/7d764513-224d-4ccb-acc5-49f319acaa63-kube-api-access-mv6wm\") pod \"nova-api-db-create-sflzs\" (UID: \"7d764513-224d-4ccb-acc5-49f319acaa63\") " pod="openstack/nova-api-db-create-sflzs" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.153734 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-2m728"] Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.158014 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2m728" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.163373 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-43ce-account-create-update-l5hkp"] Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.164757 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.166643 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.215905 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2m728"] Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.228606 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-43ce-account-create-update-l5hkp"] Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.229981 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4244588-a78b-401f-be2f-9d1c4f70fc40-operator-scripts\") pod \"nova-cell0-db-create-fdg8l\" (UID: \"a4244588-a78b-401f-be2f-9d1c4f70fc40\") " pod="openstack/nova-cell0-db-create-fdg8l" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.230287 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4mxk\" (UniqueName: \"kubernetes.io/projected/a4244588-a78b-401f-be2f-9d1c4f70fc40-kube-api-access-x4mxk\") pod \"nova-cell0-db-create-fdg8l\" (UID: \"a4244588-a78b-401f-be2f-9d1c4f70fc40\") " pod="openstack/nova-cell0-db-create-fdg8l" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.230947 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4244588-a78b-401f-be2f-9d1c4f70fc40-operator-scripts\") pod \"nova-cell0-db-create-fdg8l\" (UID: \"a4244588-a78b-401f-be2f-9d1c4f70fc40\") " pod="openstack/nova-cell0-db-create-fdg8l" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.232949 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sflzs" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.241455 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d364-account-create-update-r4pjv" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.249077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4mxk\" (UniqueName: \"kubernetes.io/projected/a4244588-a78b-401f-be2f-9d1c4f70fc40-kube-api-access-x4mxk\") pod \"nova-cell0-db-create-fdg8l\" (UID: \"a4244588-a78b-401f-be2f-9d1c4f70fc40\") " pod="openstack/nova-cell0-db-create-fdg8l" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.324571 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d59a-account-create-update-sddjc"] Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.326246 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d59a-account-create-update-sddjc" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.328482 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.332184 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp4rt\" (UniqueName: \"kubernetes.io/projected/fc948a0e-80b8-4692-997e-7c034e6e0b26-kube-api-access-pp4rt\") pod \"nova-cell0-43ce-account-create-update-l5hkp\" (UID: \"fc948a0e-80b8-4692-997e-7c034e6e0b26\") " pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.332249 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc948a0e-80b8-4692-997e-7c034e6e0b26-operator-scripts\") pod \"nova-cell0-43ce-account-create-update-l5hkp\" (UID: \"fc948a0e-80b8-4692-997e-7c034e6e0b26\") " pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.332443 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828a3f7-804a-467f-aeb0-f0a2aab63c85-operator-scripts\") pod \"nova-cell1-db-create-2m728\" (UID: \"2828a3f7-804a-467f-aeb0-f0a2aab63c85\") " pod="openstack/nova-cell1-db-create-2m728" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.332536 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gvwx\" (UniqueName: \"kubernetes.io/projected/2828a3f7-804a-467f-aeb0-f0a2aab63c85-kube-api-access-5gvwx\") pod \"nova-cell1-db-create-2m728\" (UID: \"2828a3f7-804a-467f-aeb0-f0a2aab63c85\") " pod="openstack/nova-cell1-db-create-2m728" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.334431 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d59a-account-create-update-sddjc"] Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.403787 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fdg8l" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.434758 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc948a0e-80b8-4692-997e-7c034e6e0b26-operator-scripts\") pod \"nova-cell0-43ce-account-create-update-l5hkp\" (UID: \"fc948a0e-80b8-4692-997e-7c034e6e0b26\") " pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.434870 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828a3f7-804a-467f-aeb0-f0a2aab63c85-operator-scripts\") pod \"nova-cell1-db-create-2m728\" (UID: \"2828a3f7-804a-467f-aeb0-f0a2aab63c85\") " pod="openstack/nova-cell1-db-create-2m728" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.434938 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzt55\" (UniqueName: \"kubernetes.io/projected/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-kube-api-access-lzt55\") pod \"nova-cell1-d59a-account-create-update-sddjc\" (UID: \"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b\") " pod="openstack/nova-cell1-d59a-account-create-update-sddjc" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.434967 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gvwx\" (UniqueName: \"kubernetes.io/projected/2828a3f7-804a-467f-aeb0-f0a2aab63c85-kube-api-access-5gvwx\") pod \"nova-cell1-db-create-2m728\" (UID: \"2828a3f7-804a-467f-aeb0-f0a2aab63c85\") " pod="openstack/nova-cell1-db-create-2m728" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.435002 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp4rt\" (UniqueName: \"kubernetes.io/projected/fc948a0e-80b8-4692-997e-7c034e6e0b26-kube-api-access-pp4rt\") pod \"nova-cell0-43ce-account-create-update-l5hkp\" (UID: \"fc948a0e-80b8-4692-997e-7c034e6e0b26\") " pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.435018 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-operator-scripts\") pod \"nova-cell1-d59a-account-create-update-sddjc\" (UID: \"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b\") " pod="openstack/nova-cell1-d59a-account-create-update-sddjc" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.435555 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc948a0e-80b8-4692-997e-7c034e6e0b26-operator-scripts\") pod \"nova-cell0-43ce-account-create-update-l5hkp\" (UID: \"fc948a0e-80b8-4692-997e-7c034e6e0b26\") " pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.435701 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828a3f7-804a-467f-aeb0-f0a2aab63c85-operator-scripts\") pod \"nova-cell1-db-create-2m728\" (UID: \"2828a3f7-804a-467f-aeb0-f0a2aab63c85\") " pod="openstack/nova-cell1-db-create-2m728" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.457642 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pp4rt\" (UniqueName: \"kubernetes.io/projected/fc948a0e-80b8-4692-997e-7c034e6e0b26-kube-api-access-pp4rt\") pod \"nova-cell0-43ce-account-create-update-l5hkp\" (UID: \"fc948a0e-80b8-4692-997e-7c034e6e0b26\") " pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.479760 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gvwx\" (UniqueName: \"kubernetes.io/projected/2828a3f7-804a-467f-aeb0-f0a2aab63c85-kube-api-access-5gvwx\") pod \"nova-cell1-db-create-2m728\" (UID: \"2828a3f7-804a-467f-aeb0-f0a2aab63c85\") " pod="openstack/nova-cell1-db-create-2m728" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.547121 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzt55\" (UniqueName: \"kubernetes.io/projected/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-kube-api-access-lzt55\") pod \"nova-cell1-d59a-account-create-update-sddjc\" (UID: \"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b\") " pod="openstack/nova-cell1-d59a-account-create-update-sddjc" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.547193 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-operator-scripts\") pod \"nova-cell1-d59a-account-create-update-sddjc\" (UID: \"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b\") " pod="openstack/nova-cell1-d59a-account-create-update-sddjc" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.548782 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-operator-scripts\") pod \"nova-cell1-d59a-account-create-update-sddjc\" (UID: \"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b\") " pod="openstack/nova-cell1-d59a-account-create-update-sddjc" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.599519 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzt55\" (UniqueName: \"kubernetes.io/projected/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-kube-api-access-lzt55\") pod \"nova-cell1-d59a-account-create-update-sddjc\" (UID: \"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b\") " pod="openstack/nova-cell1-d59a-account-create-update-sddjc" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.658998 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2m728" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.691722 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.696506 4805 scope.go:117] "RemoveContainer" containerID="79060f9a08bb54f3dc88c430e4de297c64e0199f2f4dd182ce04467c6dc2e3c2" Feb 17 00:45:10 crc kubenswrapper[4805]: E0217 00:45:10.696702 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-dbb694f6f-kn89d_openstack(fb384fc5-09b9-47e4-9ed0-06d7330e6abf)\"" pod="openstack/heat-api-dbb694f6f-kn89d" podUID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.697031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d95b51c-8931-443e-a499-c7164a006372","Type":"ContainerStarted","Data":"4fb0b35b0566673b5817b35e38ab7392afb0dc13ddec9f45478d9dee05941f35"} Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.697053 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d95b51c-8931-443e-a499-c7164a006372","Type":"ContainerStarted","Data":"6233fafdeff89ca95995afc301f08a780ba6f4184279e9ca422edd4435749189"} Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.697348 4805 scope.go:117] "RemoveContainer" containerID="d0e8be38afa9691741ed8c1d75920310aa840b5a9a2fde82aff84e4a1a1a8c0b" Feb 17 00:45:10 crc kubenswrapper[4805]: E0217 00:45:10.697507 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8b5758cbb-lvlb7_openstack(ddd72c63-70cf-4c86-8fab-be57a13993f3)\"" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" podUID="ddd72c63-70cf-4c86-8fab-be57a13993f3" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.701049 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d59a-account-create-update-sddjc" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.823749 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab376c9f-5da0-4d6f-aca4-16c20967016d" path="/var/lib/kubelet/pods/ab376c9f-5da0-4d6f-aca4-16c20967016d/volumes" Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.874990 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d364-account-create-update-r4pjv"] Feb 17 00:45:10 crc kubenswrapper[4805]: I0217 00:45:10.958013 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sflzs"] Feb 17 00:45:10 crc kubenswrapper[4805]: W0217 00:45:10.982787 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d764513_224d_4ccb_acc5_49f319acaa63.slice/crio-f3fe487daf821672becc5b30f1e9c20dbd897c433cd8692b568e78c2d2bd8ada WatchSource:0}: Error finding container f3fe487daf821672becc5b30f1e9c20dbd897c433cd8692b568e78c2d2bd8ada: Status 404 returned error can't find the container with id f3fe487daf821672becc5b30f1e9c20dbd897c433cd8692b568e78c2d2bd8ada Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.037195 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fdg8l"] Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.256344 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-43ce-account-create-update-l5hkp"] Feb 17 00:45:11 crc kubenswrapper[4805]: W0217 00:45:11.275910 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc948a0e_80b8_4692_997e_7c034e6e0b26.slice/crio-f83d5c23b14fcdcc03b9eff0b599e122077e13a00173f2d6f0e75cb60b7ffc11 WatchSource:0}: Error finding container f83d5c23b14fcdcc03b9eff0b599e122077e13a00173f2d6f0e75cb60b7ffc11: Status 404 returned error can't find the container with id f83d5c23b14fcdcc03b9eff0b599e122077e13a00173f2d6f0e75cb60b7ffc11 Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.342427 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d59a-account-create-update-sddjc"] Feb 17 00:45:11 crc kubenswrapper[4805]: W0217 00:45:11.343119 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2d4ac3b_a1b7_4e76_9ece_2b53b976e05b.slice/crio-9b1276bd996ad01613b60e5df048cb54746883ac1664200f1c59a4010b509a0d WatchSource:0}: Error finding container 9b1276bd996ad01613b60e5df048cb54746883ac1664200f1c59a4010b509a0d: Status 404 returned error can't find the container with id 9b1276bd996ad01613b60e5df048cb54746883ac1664200f1c59a4010b509a0d Feb 17 00:45:11 crc kubenswrapper[4805]: W0217 00:45:11.345622 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2828a3f7_804a_467f_aeb0_f0a2aab63c85.slice/crio-5eb85bc24a79f2225e947059a0865891f4a4221583169d16faaa6d979b974f5c WatchSource:0}: Error finding container 5eb85bc24a79f2225e947059a0865891f4a4221583169d16faaa6d979b974f5c: Status 404 returned error can't find the container with id 5eb85bc24a79f2225e947059a0865891f4a4221583169d16faaa6d979b974f5c Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.368396 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2m728"] Feb 17 00:45:11 crc 
kubenswrapper[4805]: I0217 00:45:11.710434 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" event={"ID":"fc948a0e-80b8-4692-997e-7c034e6e0b26","Type":"ContainerStarted","Data":"cc6debe96d1ba6f753a8fa21cb99e24b28660a8c260a191f527b69659733a9b7"} Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.710503 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" event={"ID":"fc948a0e-80b8-4692-997e-7c034e6e0b26","Type":"ContainerStarted","Data":"f83d5c23b14fcdcc03b9eff0b599e122077e13a00173f2d6f0e75cb60b7ffc11"} Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.714413 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d95b51c-8931-443e-a499-c7164a006372","Type":"ContainerStarted","Data":"0efb05d73511cd236030ff091d24159d1903744b24cc8d9677024977b7658b5b"} Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.716506 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d59a-account-create-update-sddjc" event={"ID":"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b","Type":"ContainerStarted","Data":"9b1276bd996ad01613b60e5df048cb54746883ac1664200f1c59a4010b509a0d"} Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.719757 4805 generic.go:334] "Generic (PLEG): container finished" podID="068c01a0-347f-401a-bac0-b0e82bb04e7d" containerID="e0fd1dd8d942807fe2dfa5240e3be3bbe6fb9d94151dafd469fffeed4031f486" exitCode=0 Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.719935 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d364-account-create-update-r4pjv" event={"ID":"068c01a0-347f-401a-bac0-b0e82bb04e7d","Type":"ContainerDied","Data":"e0fd1dd8d942807fe2dfa5240e3be3bbe6fb9d94151dafd469fffeed4031f486"} Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.719970 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d364-account-create-update-r4pjv" event={"ID":"068c01a0-347f-401a-bac0-b0e82bb04e7d","Type":"ContainerStarted","Data":"40bc3b1a7a5b32c965b7d777167a5d3412e786f7d7f10f9a0eab85371ef952a1"} Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.721287 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2m728" event={"ID":"2828a3f7-804a-467f-aeb0-f0a2aab63c85","Type":"ContainerStarted","Data":"5eb85bc24a79f2225e947059a0865891f4a4221583169d16faaa6d979b974f5c"} Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.722568 4805 generic.go:334] "Generic (PLEG): container finished" podID="a4244588-a78b-401f-be2f-9d1c4f70fc40" containerID="201d67b148cd31ce445883bf4e7186640714adb6981db38625ca86574f6e3442" exitCode=0 Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.722624 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fdg8l" event={"ID":"a4244588-a78b-401f-be2f-9d1c4f70fc40","Type":"ContainerDied","Data":"201d67b148cd31ce445883bf4e7186640714adb6981db38625ca86574f6e3442"} Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.722643 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fdg8l" event={"ID":"a4244588-a78b-401f-be2f-9d1c4f70fc40","Type":"ContainerStarted","Data":"4432d66764c72be9da3ef8b5c14db0c866dfbc5a874f0196161d90d8b7384ee1"} Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.724225 4805 generic.go:334] "Generic (PLEG): container finished" podID="7d764513-224d-4ccb-acc5-49f319acaa63" 
containerID="ee9588fea770657fb2ad8fb91aaf3dac6c8b272b0804d899c477ec6534290196" exitCode=0 Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.724259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sflzs" event={"ID":"7d764513-224d-4ccb-acc5-49f319acaa63","Type":"ContainerDied","Data":"ee9588fea770657fb2ad8fb91aaf3dac6c8b272b0804d899c477ec6534290196"} Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.724278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sflzs" event={"ID":"7d764513-224d-4ccb-acc5-49f319acaa63","Type":"ContainerStarted","Data":"f3fe487daf821672becc5b30f1e9c20dbd897c433cd8692b568e78c2d2bd8ada"} Feb 17 00:45:11 crc kubenswrapper[4805]: I0217 00:45:11.737794 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" podStartSLOduration=1.737772234 podStartE2EDuration="1.737772234s" podCreationTimestamp="2026-02-17 00:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:45:11.726868827 +0000 UTC m=+1337.742678225" watchObservedRunningTime="2026-02-17 00:45:11.737772234 +0000 UTC m=+1337.753581632" Feb 17 00:45:12 crc kubenswrapper[4805]: I0217 00:45:12.737111 4805 generic.go:334] "Generic (PLEG): container finished" podID="2828a3f7-804a-467f-aeb0-f0a2aab63c85" containerID="c990bf1ca91d471d573bf212cec9c762c283af21285f8df889311e8dc4430c43" exitCode=0 Feb 17 00:45:12 crc kubenswrapper[4805]: I0217 00:45:12.737485 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2m728" event={"ID":"2828a3f7-804a-467f-aeb0-f0a2aab63c85","Type":"ContainerDied","Data":"c990bf1ca91d471d573bf212cec9c762c283af21285f8df889311e8dc4430c43"} Feb 17 00:45:12 crc kubenswrapper[4805]: I0217 00:45:12.739879 4805 generic.go:334] "Generic (PLEG): container finished" podID="fc948a0e-80b8-4692-997e-7c034e6e0b26" containerID="cc6debe96d1ba6f753a8fa21cb99e24b28660a8c260a191f527b69659733a9b7" exitCode=0 Feb 17 00:45:12 crc kubenswrapper[4805]: I0217 00:45:12.739935 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" event={"ID":"fc948a0e-80b8-4692-997e-7c034e6e0b26","Type":"ContainerDied","Data":"cc6debe96d1ba6f753a8fa21cb99e24b28660a8c260a191f527b69659733a9b7"} Feb 17 00:45:12 crc kubenswrapper[4805]: I0217 00:45:12.742667 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d95b51c-8931-443e-a499-c7164a006372","Type":"ContainerStarted","Data":"889c20424d7d2fa30eb7ac1d79ada04ba4e086f8d03b7e5d5202d82dc32ec1b1"} Feb 17 00:45:12 crc kubenswrapper[4805]: I0217 00:45:12.744813 4805 generic.go:334] "Generic (PLEG): container finished" podID="f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b" containerID="2de4fb278c535f7e0e137671be608bdbfc1db2791b94a46f4c39e309374d9ee5" exitCode=0 Feb 17 00:45:12 crc kubenswrapper[4805]: I0217 00:45:12.745077 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d59a-account-create-update-sddjc" event={"ID":"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b","Type":"ContainerDied","Data":"2de4fb278c535f7e0e137671be608bdbfc1db2791b94a46f4c39e309374d9ee5"} Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.627555 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-sflzs" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.633091 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d364-account-create-update-r4pjv" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.638147 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fdg8l" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.724409 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxwq5\" (UniqueName: \"kubernetes.io/projected/068c01a0-347f-401a-bac0-b0e82bb04e7d-kube-api-access-sxwq5\") pod \"068c01a0-347f-401a-bac0-b0e82bb04e7d\" (UID: \"068c01a0-347f-401a-bac0-b0e82bb04e7d\") " Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.724665 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d764513-224d-4ccb-acc5-49f319acaa63-operator-scripts\") pod \"7d764513-224d-4ccb-acc5-49f319acaa63\" (UID: \"7d764513-224d-4ccb-acc5-49f319acaa63\") " Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.724702 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/068c01a0-347f-401a-bac0-b0e82bb04e7d-operator-scripts\") pod \"068c01a0-347f-401a-bac0-b0e82bb04e7d\" (UID: \"068c01a0-347f-401a-bac0-b0e82bb04e7d\") " Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.724754 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4mxk\" (UniqueName: \"kubernetes.io/projected/a4244588-a78b-401f-be2f-9d1c4f70fc40-kube-api-access-x4mxk\") pod \"a4244588-a78b-401f-be2f-9d1c4f70fc40\" (UID: \"a4244588-a78b-401f-be2f-9d1c4f70fc40\") " Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.724811 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv6wm\" (UniqueName: \"kubernetes.io/projected/7d764513-224d-4ccb-acc5-49f319acaa63-kube-api-access-mv6wm\") pod \"7d764513-224d-4ccb-acc5-49f319acaa63\" (UID: \"7d764513-224d-4ccb-acc5-49f319acaa63\") " Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.724837 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4244588-a78b-401f-be2f-9d1c4f70fc40-operator-scripts\") pod \"a4244588-a78b-401f-be2f-9d1c4f70fc40\" (UID: \"a4244588-a78b-401f-be2f-9d1c4f70fc40\") " Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.725704 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4244588-a78b-401f-be2f-9d1c4f70fc40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4244588-a78b-401f-be2f-9d1c4f70fc40" (UID: "a4244588-a78b-401f-be2f-9d1c4f70fc40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.726357 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068c01a0-347f-401a-bac0-b0e82bb04e7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "068c01a0-347f-401a-bac0-b0e82bb04e7d" (UID: "068c01a0-347f-401a-bac0-b0e82bb04e7d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.726643 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d764513-224d-4ccb-acc5-49f319acaa63-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d764513-224d-4ccb-acc5-49f319acaa63" (UID: "7d764513-224d-4ccb-acc5-49f319acaa63"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.730281 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d764513-224d-4ccb-acc5-49f319acaa63-kube-api-access-mv6wm" (OuterVolumeSpecName: "kube-api-access-mv6wm") pod "7d764513-224d-4ccb-acc5-49f319acaa63" (UID: "7d764513-224d-4ccb-acc5-49f319acaa63"). InnerVolumeSpecName "kube-api-access-mv6wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.731312 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4244588-a78b-401f-be2f-9d1c4f70fc40-kube-api-access-x4mxk" (OuterVolumeSpecName: "kube-api-access-x4mxk") pod "a4244588-a78b-401f-be2f-9d1c4f70fc40" (UID: "a4244588-a78b-401f-be2f-9d1c4f70fc40"). InnerVolumeSpecName "kube-api-access-x4mxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.731376 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068c01a0-347f-401a-bac0-b0e82bb04e7d-kube-api-access-sxwq5" (OuterVolumeSpecName: "kube-api-access-sxwq5") pod "068c01a0-347f-401a-bac0-b0e82bb04e7d" (UID: "068c01a0-347f-401a-bac0-b0e82bb04e7d"). InnerVolumeSpecName "kube-api-access-sxwq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.756861 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d95b51c-8931-443e-a499-c7164a006372","Type":"ContainerStarted","Data":"527cba162d34f309854b97cef664358bf83b64e5e60f0b67d2c0cf23072f4bff"} Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.758309 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.760643 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fdg8l" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.760927 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fdg8l" event={"ID":"a4244588-a78b-401f-be2f-9d1c4f70fc40","Type":"ContainerDied","Data":"4432d66764c72be9da3ef8b5c14db0c866dfbc5a874f0196161d90d8b7384ee1"} Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.760962 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4432d66764c72be9da3ef8b5c14db0c866dfbc5a874f0196161d90d8b7384ee1" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.762778 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sflzs" event={"ID":"7d764513-224d-4ccb-acc5-49f319acaa63","Type":"ContainerDied","Data":"f3fe487daf821672becc5b30f1e9c20dbd897c433cd8692b568e78c2d2bd8ada"} Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.762807 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3fe487daf821672becc5b30f1e9c20dbd897c433cd8692b568e78c2d2bd8ada" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.762861 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sflzs" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.765371 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d364-account-create-update-r4pjv" event={"ID":"068c01a0-347f-401a-bac0-b0e82bb04e7d","Type":"ContainerDied","Data":"40bc3b1a7a5b32c965b7d777167a5d3412e786f7d7f10f9a0eab85371ef952a1"} Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.765399 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40bc3b1a7a5b32c965b7d777167a5d3412e786f7d7f10f9a0eab85371ef952a1" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.765411 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d364-account-create-update-r4pjv" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.787944 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.033387324 podStartE2EDuration="5.787925647s" podCreationTimestamp="2026-02-17 00:45:08 +0000 UTC" firstStartedPulling="2026-02-17 00:45:09.681496239 +0000 UTC m=+1335.697305637" lastFinishedPulling="2026-02-17 00:45:13.436034552 +0000 UTC m=+1339.451843960" observedRunningTime="2026-02-17 00:45:13.779714216 +0000 UTC m=+1339.795523624" watchObservedRunningTime="2026-02-17 00:45:13.787925647 +0000 UTC m=+1339.803735055" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.828737 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxwq5\" (UniqueName: \"kubernetes.io/projected/068c01a0-347f-401a-bac0-b0e82bb04e7d-kube-api-access-sxwq5\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.828767 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d764513-224d-4ccb-acc5-49f319acaa63-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.828783 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/068c01a0-347f-401a-bac0-b0e82bb04e7d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.828799 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4mxk\" (UniqueName: \"kubernetes.io/projected/a4244588-a78b-401f-be2f-9d1c4f70fc40-kube-api-access-x4mxk\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.828813 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv6wm\" (UniqueName: \"kubernetes.io/projected/7d764513-224d-4ccb-acc5-49f319acaa63-kube-api-access-mv6wm\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:13 crc kubenswrapper[4805]: I0217 00:45:13.828826 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4244588-a78b-401f-be2f-9d1c4f70fc40-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.094475 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-2m728" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.237056 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828a3f7-804a-467f-aeb0-f0a2aab63c85-operator-scripts\") pod \"2828a3f7-804a-467f-aeb0-f0a2aab63c85\" (UID: \"2828a3f7-804a-467f-aeb0-f0a2aab63c85\") " Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.237170 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gvwx\" (UniqueName: \"kubernetes.io/projected/2828a3f7-804a-467f-aeb0-f0a2aab63c85-kube-api-access-5gvwx\") pod \"2828a3f7-804a-467f-aeb0-f0a2aab63c85\" (UID: \"2828a3f7-804a-467f-aeb0-f0a2aab63c85\") " Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.237652 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2828a3f7-804a-467f-aeb0-f0a2aab63c85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2828a3f7-804a-467f-aeb0-f0a2aab63c85" (UID: "2828a3f7-804a-467f-aeb0-f0a2aab63c85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.241248 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2828a3f7-804a-467f-aeb0-f0a2aab63c85-kube-api-access-5gvwx" (OuterVolumeSpecName: "kube-api-access-5gvwx") pod "2828a3f7-804a-467f-aeb0-f0a2aab63c85" (UID: "2828a3f7-804a-467f-aeb0-f0a2aab63c85"). InnerVolumeSpecName "kube-api-access-5gvwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.340717 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828a3f7-804a-467f-aeb0-f0a2aab63c85-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.341206 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gvwx\" (UniqueName: \"kubernetes.io/projected/2828a3f7-804a-467f-aeb0-f0a2aab63c85-kube-api-access-5gvwx\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.482062 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.488934 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d59a-account-create-update-sddjc" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.544735 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp4rt\" (UniqueName: \"kubernetes.io/projected/fc948a0e-80b8-4692-997e-7c034e6e0b26-kube-api-access-pp4rt\") pod \"fc948a0e-80b8-4692-997e-7c034e6e0b26\" (UID: \"fc948a0e-80b8-4692-997e-7c034e6e0b26\") " Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.544824 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc948a0e-80b8-4692-997e-7c034e6e0b26-operator-scripts\") pod \"fc948a0e-80b8-4692-997e-7c034e6e0b26\" (UID: \"fc948a0e-80b8-4692-997e-7c034e6e0b26\") " Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.544890 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzt55\" (UniqueName: \"kubernetes.io/projected/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-kube-api-access-lzt55\") pod \"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b\" (UID: \"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b\") " Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.545057 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-operator-scripts\") pod \"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b\" (UID: \"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b\") " Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.545992 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc948a0e-80b8-4692-997e-7c034e6e0b26-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc948a0e-80b8-4692-997e-7c034e6e0b26" (UID: "fc948a0e-80b8-4692-997e-7c034e6e0b26"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.546076 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b" (UID: "f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.573704 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc948a0e-80b8-4692-997e-7c034e6e0b26-kube-api-access-pp4rt" (OuterVolumeSpecName: "kube-api-access-pp4rt") pod "fc948a0e-80b8-4692-997e-7c034e6e0b26" (UID: "fc948a0e-80b8-4692-997e-7c034e6e0b26"). InnerVolumeSpecName "kube-api-access-pp4rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.573793 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-kube-api-access-lzt55" (OuterVolumeSpecName: "kube-api-access-lzt55") pod "f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b" (UID: "f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b"). InnerVolumeSpecName "kube-api-access-lzt55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.647699 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.647737 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp4rt\" (UniqueName: \"kubernetes.io/projected/fc948a0e-80b8-4692-997e-7c034e6e0b26-kube-api-access-pp4rt\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.647751 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc948a0e-80b8-4692-997e-7c034e6e0b26-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.647764 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzt55\" (UniqueName: \"kubernetes.io/projected/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b-kube-api-access-lzt55\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.774548 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d59a-account-create-update-sddjc" event={"ID":"f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b","Type":"ContainerDied","Data":"9b1276bd996ad01613b60e5df048cb54746883ac1664200f1c59a4010b509a0d"} Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.774607 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1276bd996ad01613b60e5df048cb54746883ac1664200f1c59a4010b509a0d" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.774576 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d59a-account-create-update-sddjc" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.775705 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2m728" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.775708 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2m728" event={"ID":"2828a3f7-804a-467f-aeb0-f0a2aab63c85","Type":"ContainerDied","Data":"5eb85bc24a79f2225e947059a0865891f4a4221583169d16faaa6d979b974f5c"} Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.775755 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb85bc24a79f2225e947059a0865891f4a4221583169d16faaa6d979b974f5c" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.776959 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" event={"ID":"fc948a0e-80b8-4692-997e-7c034e6e0b26","Type":"ContainerDied","Data":"f83d5c23b14fcdcc03b9eff0b599e122077e13a00173f2d6f0e75cb60b7ffc11"} Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.776994 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f83d5c23b14fcdcc03b9eff0b599e122077e13a00173f2d6f0e75cb60b7ffc11" Feb 17 00:45:14 crc kubenswrapper[4805]: I0217 00:45:14.776977 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-43ce-account-create-update-l5hkp" Feb 17 00:45:17 crc kubenswrapper[4805]: I0217 00:45:17.656391 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:45:17 crc kubenswrapper[4805]: I0217 00:45:17.727590 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-875d6bfdc-p74bh" Feb 17 00:45:17 crc kubenswrapper[4805]: I0217 00:45:17.741795 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7756f86689-rb9tx" Feb 17 00:45:17 crc kubenswrapper[4805]: I0217 00:45:17.796708 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8b5758cbb-lvlb7"] Feb 17 00:45:17 crc kubenswrapper[4805]: I0217 00:45:17.809836 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-dbb694f6f-kn89d"] Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.790793 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.835563 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.873395 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-combined-ca-bundle\") pod \"ddd72c63-70cf-4c86-8fab-be57a13993f3\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.873480 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data\") pod \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.873546 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data\") pod \"ddd72c63-70cf-4c86-8fab-be57a13993f3\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.873613 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data-custom\") pod \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.873709 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djr5g\" (UniqueName: \"kubernetes.io/projected/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-kube-api-access-djr5g\") pod \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.873747 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-combined-ca-bundle\") pod \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\" (UID: \"fb384fc5-09b9-47e4-9ed0-06d7330e6abf\") " Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.873842 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-v2lsm\" (UniqueName: \"kubernetes.io/projected/ddd72c63-70cf-4c86-8fab-be57a13993f3-kube-api-access-v2lsm\") pod \"ddd72c63-70cf-4c86-8fab-be57a13993f3\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.873873 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data-custom\") pod \"ddd72c63-70cf-4c86-8fab-be57a13993f3\" (UID: \"ddd72c63-70cf-4c86-8fab-be57a13993f3\") " Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.888071 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ddd72c63-70cf-4c86-8fab-be57a13993f3" (UID: "ddd72c63-70cf-4c86-8fab-be57a13993f3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.889887 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fb384fc5-09b9-47e4-9ed0-06d7330e6abf" (UID: "fb384fc5-09b9-47e4-9ed0-06d7330e6abf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.890482 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd72c63-70cf-4c86-8fab-be57a13993f3-kube-api-access-v2lsm" (OuterVolumeSpecName: "kube-api-access-v2lsm") pod "ddd72c63-70cf-4c86-8fab-be57a13993f3" (UID: "ddd72c63-70cf-4c86-8fab-be57a13993f3"). InnerVolumeSpecName "kube-api-access-v2lsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.895515 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-kube-api-access-djr5g" (OuterVolumeSpecName: "kube-api-access-djr5g") pod "fb384fc5-09b9-47e4-9ed0-06d7330e6abf" (UID: "fb384fc5-09b9-47e4-9ed0-06d7330e6abf"). InnerVolumeSpecName "kube-api-access-djr5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.921416 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb384fc5-09b9-47e4-9ed0-06d7330e6abf" (UID: "fb384fc5-09b9-47e4-9ed0-06d7330e6abf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.959485 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddd72c63-70cf-4c86-8fab-be57a13993f3" (UID: "ddd72c63-70cf-4c86-8fab-be57a13993f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.976239 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.976268 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.976279 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djr5g\" (UniqueName: \"kubernetes.io/projected/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-kube-api-access-djr5g\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.976289 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.976298 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2lsm\" (UniqueName: \"kubernetes.io/projected/ddd72c63-70cf-4c86-8fab-be57a13993f3-kube-api-access-v2lsm\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.976306 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.976886 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data" (OuterVolumeSpecName: "config-data") pod "fb384fc5-09b9-47e4-9ed0-06d7330e6abf" (UID: "fb384fc5-09b9-47e4-9ed0-06d7330e6abf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:18 crc kubenswrapper[4805]: I0217 00:45:18.997406 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data" (OuterVolumeSpecName: "config-data") pod "ddd72c63-70cf-4c86-8fab-be57a13993f3" (UID: "ddd72c63-70cf-4c86-8fab-be57a13993f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.078112 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb384fc5-09b9-47e4-9ed0-06d7330e6abf-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.078357 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddd72c63-70cf-4c86-8fab-be57a13993f3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.186098 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" event={"ID":"ddd72c63-70cf-4c86-8fab-be57a13993f3","Type":"ContainerDied","Data":"f81b5277280e2a3283adf435faf484e6c95505836ff53fe8d93a473ddeca5773"} Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.186151 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8b5758cbb-lvlb7" Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.186240 4805 scope.go:117] "RemoveContainer" containerID="d0e8be38afa9691741ed8c1d75920310aa840b5a9a2fde82aff84e4a1a1a8c0b" Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.188114 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-dbb694f6f-kn89d" event={"ID":"fb384fc5-09b9-47e4-9ed0-06d7330e6abf","Type":"ContainerDied","Data":"d272236c624153ac584fcd759ebc5d5a5b899f3285c08423d15968463f51a023"} Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.188217 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-dbb694f6f-kn89d" Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.232381 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8b5758cbb-lvlb7"] Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.232525 4805 scope.go:117] "RemoveContainer" containerID="79060f9a08bb54f3dc88c430e4de297c64e0199f2f4dd182ce04467c6dc2e3c2" Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.243427 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-8b5758cbb-lvlb7"] Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.252366 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-dbb694f6f-kn89d"] Feb 17 00:45:19 crc kubenswrapper[4805]: I0217 00:45:19.261783 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-dbb694f6f-kn89d"] Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.418682 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2rpd"] Feb 17 00:45:20 crc kubenswrapper[4805]: E0217 00:45:20.419401 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" containerName="heat-api" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419416 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" containerName="heat-api" Feb 17 00:45:20 crc kubenswrapper[4805]: E0217 00:45:20.419430 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd72c63-70cf-4c86-8fab-be57a13993f3" containerName="heat-cfnapi" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419436 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd72c63-70cf-4c86-8fab-be57a13993f3" containerName="heat-cfnapi" Feb 17 00:45:20 crc kubenswrapper[4805]: E0217 00:45:20.419450 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2828a3f7-804a-467f-aeb0-f0a2aab63c85" containerName="mariadb-database-create" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419455 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2828a3f7-804a-467f-aeb0-f0a2aab63c85" containerName="mariadb-database-create" Feb 17 00:45:20 crc kubenswrapper[4805]: E0217 00:45:20.419466 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc948a0e-80b8-4692-997e-7c034e6e0b26" containerName="mariadb-account-create-update" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419472 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc948a0e-80b8-4692-997e-7c034e6e0b26" containerName="mariadb-account-create-update" Feb 17 00:45:20 crc kubenswrapper[4805]: E0217 00:45:20.419487 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4244588-a78b-401f-be2f-9d1c4f70fc40" 
containerName="mariadb-database-create" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419493 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4244588-a78b-401f-be2f-9d1c4f70fc40" containerName="mariadb-database-create" Feb 17 00:45:20 crc kubenswrapper[4805]: E0217 00:45:20.419502 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068c01a0-347f-401a-bac0-b0e82bb04e7d" containerName="mariadb-account-create-update" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419508 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="068c01a0-347f-401a-bac0-b0e82bb04e7d" containerName="mariadb-account-create-update" Feb 17 00:45:20 crc kubenswrapper[4805]: E0217 00:45:20.419519 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d764513-224d-4ccb-acc5-49f319acaa63" containerName="mariadb-database-create" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419524 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d764513-224d-4ccb-acc5-49f319acaa63" containerName="mariadb-database-create" Feb 17 00:45:20 crc kubenswrapper[4805]: E0217 00:45:20.419535 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b" containerName="mariadb-account-create-update" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419540 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b" containerName="mariadb-account-create-update" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419730 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4244588-a78b-401f-be2f-9d1c4f70fc40" containerName="mariadb-database-create" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419741 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d764513-224d-4ccb-acc5-49f319acaa63" containerName="mariadb-database-create" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419748 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc948a0e-80b8-4692-997e-7c034e6e0b26" containerName="mariadb-account-create-update" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419757 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2828a3f7-804a-467f-aeb0-f0a2aab63c85" containerName="mariadb-database-create" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419765 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" containerName="heat-api" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419773 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" containerName="heat-api" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419787 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd72c63-70cf-4c86-8fab-be57a13993f3" containerName="heat-cfnapi" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419796 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd72c63-70cf-4c86-8fab-be57a13993f3" containerName="heat-cfnapi" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419807 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b" containerName="mariadb-account-create-update" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.419818 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="068c01a0-347f-401a-bac0-b0e82bb04e7d" 
containerName="mariadb-account-create-update" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.422374 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.424714 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.424759 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.424940 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-84zgj" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.442338 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2rpd"] Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.506361 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-scripts\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.506440 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czhbs\" (UniqueName: \"kubernetes.io/projected/2e200cb5-e7c9-416c-857b-71caf2b00de3-kube-api-access-czhbs\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.506526 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-config-data\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.506680 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.609304 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-scripts\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.609714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czhbs\" (UniqueName: \"kubernetes.io/projected/2e200cb5-e7c9-416c-857b-71caf2b00de3-kube-api-access-czhbs\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.609747 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-config-data\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.609773 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.614689 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.615020 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-scripts\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.625156 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-config-data\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.625819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czhbs\" (UniqueName: \"kubernetes.io/projected/2e200cb5-e7c9-416c-857b-71caf2b00de3-kube-api-access-czhbs\") pod \"nova-cell0-conductor-db-sync-l2rpd\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.740503 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.830127 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd72c63-70cf-4c86-8fab-be57a13993f3" path="/var/lib/kubelet/pods/ddd72c63-70cf-4c86-8fab-be57a13993f3/volumes" Feb 17 00:45:20 crc kubenswrapper[4805]: I0217 00:45:20.830893 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" path="/var/lib/kubelet/pods/fb384fc5-09b9-47e4-9ed0-06d7330e6abf/volumes" Feb 17 00:45:21 crc kubenswrapper[4805]: I0217 00:45:21.287881 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2rpd"] Feb 17 00:45:22 crc kubenswrapper[4805]: I0217 00:45:22.226647 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2rpd" event={"ID":"2e200cb5-e7c9-416c-857b-71caf2b00de3","Type":"ContainerStarted","Data":"6aeab6291a2349dbd18a2b3002507c021d54ec6a7ae14004a7d247d7aea0e290"} Feb 17 00:45:24 crc kubenswrapper[4805]: I0217 00:45:24.758828 4805 trace.go:236] Trace[150537745]: "Calculate volume metrics of wal for pod openshift-logging/logging-loki-ingester-0" (17-Feb-2026 00:45:22.251) (total time: 2507ms): Feb 17 00:45:24 crc kubenswrapper[4805]: Trace[150537745]: [2.507156829s] [2.507156829s] END Feb 17 00:45:24 crc kubenswrapper[4805]: I0217 00:45:24.942287 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7b4c598ff7-vv75x" Feb 17 00:45:25 crc kubenswrapper[4805]: I0217 00:45:25.027712 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-9c44689dd-p9ww5"] Feb 17 00:45:25 crc kubenswrapper[4805]: I0217 00:45:25.027909 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-9c44689dd-p9ww5" podUID="8cc03862-2ea6-4041-badb-7902bc29fb9f" containerName="heat-engine" containerID="cri-o://acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad" gracePeriod=60 Feb 17 00:45:27 crc kubenswrapper[4805]: E0217 00:45:27.625260 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 00:45:27 crc kubenswrapper[4805]: E0217 00:45:27.628280 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 00:45:27 crc kubenswrapper[4805]: E0217 00:45:27.629613 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 00:45:27 crc kubenswrapper[4805]: E0217 00:45:27.629663 4805 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/heat-engine-9c44689dd-p9ww5" podUID="8cc03862-2ea6-4041-badb-7902bc29fb9f" containerName="heat-engine" Feb 17 00:45:36 crc kubenswrapper[4805]: I0217 00:45:36.195980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2rpd" event={"ID":"2e200cb5-e7c9-416c-857b-71caf2b00de3","Type":"ContainerStarted","Data":"8a8e3a3cf7f7794e6e3728588ef954dc866e44e0f7dd4b062f7071342adcca5c"} Feb 17 00:45:36 crc kubenswrapper[4805]: I0217 00:45:36.229184 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-l2rpd" podStartSLOduration=1.914159373 podStartE2EDuration="16.229165603s" podCreationTimestamp="2026-02-17 00:45:20 +0000 UTC" firstStartedPulling="2026-02-17 00:45:21.286449736 +0000 UTC m=+1347.302259134" lastFinishedPulling="2026-02-17 00:45:35.601455966 +0000 UTC m=+1361.617265364" observedRunningTime="2026-02-17 00:45:36.215707529 +0000 UTC m=+1362.231516927" watchObservedRunningTime="2026-02-17 00:45:36.229165603 +0000 UTC m=+1362.244975001" Feb 17 00:45:37 crc kubenswrapper[4805]: E0217 00:45:37.625424 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 00:45:37 crc kubenswrapper[4805]: E0217 00:45:37.628366 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 00:45:37 crc kubenswrapper[4805]: E0217 00:45:37.629905 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 17 00:45:37 crc kubenswrapper[4805]: E0217 00:45:37.630000 4805 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-9c44689dd-p9ww5" podUID="8cc03862-2ea6-4041-badb-7902bc29fb9f" containerName="heat-engine" Feb 17 00:45:39 crc kubenswrapper[4805]: I0217 00:45:39.207394 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 00:45:39 crc kubenswrapper[4805]: I0217 00:45:39.835145 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:45:39 crc kubenswrapper[4805]: I0217 00:45:39.962098 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tq2d\" (UniqueName: \"kubernetes.io/projected/8cc03862-2ea6-4041-badb-7902bc29fb9f-kube-api-access-7tq2d\") pod \"8cc03862-2ea6-4041-badb-7902bc29fb9f\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " Feb 17 00:45:39 crc kubenswrapper[4805]: I0217 00:45:39.962180 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data-custom\") pod \"8cc03862-2ea6-4041-badb-7902bc29fb9f\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " Feb 17 00:45:39 crc kubenswrapper[4805]: I0217 00:45:39.962285 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data\") pod \"8cc03862-2ea6-4041-badb-7902bc29fb9f\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " Feb 17 00:45:39 crc kubenswrapper[4805]: I0217 00:45:39.962347 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-combined-ca-bundle\") pod \"8cc03862-2ea6-4041-badb-7902bc29fb9f\" (UID: \"8cc03862-2ea6-4041-badb-7902bc29fb9f\") " Feb 17 00:45:39 crc kubenswrapper[4805]: I0217 00:45:39.968134 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cc03862-2ea6-4041-badb-7902bc29fb9f-kube-api-access-7tq2d" (OuterVolumeSpecName: "kube-api-access-7tq2d") pod "8cc03862-2ea6-4041-badb-7902bc29fb9f" (UID: "8cc03862-2ea6-4041-badb-7902bc29fb9f"). InnerVolumeSpecName "kube-api-access-7tq2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:39 crc kubenswrapper[4805]: I0217 00:45:39.969859 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8cc03862-2ea6-4041-badb-7902bc29fb9f" (UID: "8cc03862-2ea6-4041-badb-7902bc29fb9f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.019530 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cc03862-2ea6-4041-badb-7902bc29fb9f" (UID: "8cc03862-2ea6-4041-badb-7902bc29fb9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.036552 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data" (OuterVolumeSpecName: "config-data") pod "8cc03862-2ea6-4041-badb-7902bc29fb9f" (UID: "8cc03862-2ea6-4041-badb-7902bc29fb9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.064621 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tq2d\" (UniqueName: \"kubernetes.io/projected/8cc03862-2ea6-4041-badb-7902bc29fb9f-kube-api-access-7tq2d\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.064654 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.064664 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.064674 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc03862-2ea6-4041-badb-7902bc29fb9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.215766 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.216126 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="proxy-httpd" containerID="cri-o://527cba162d34f309854b97cef664358bf83b64e5e60f0b67d2c0cf23072f4bff" gracePeriod=30 Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.216279 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="sg-core" containerID="cri-o://889c20424d7d2fa30eb7ac1d79ada04ba4e086f8d03b7e5d5202d82dc32ec1b1" gracePeriod=30 Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.216417 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="ceilometer-notification-agent" containerID="cri-o://0efb05d73511cd236030ff091d24159d1903744b24cc8d9677024977b7658b5b" gracePeriod=30 Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.216092 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="ceilometer-central-agent" containerID="cri-o://4fb0b35b0566673b5817b35e38ab7392afb0dc13ddec9f45478d9dee05941f35" gracePeriod=30 Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.237796 4805 generic.go:334] "Generic (PLEG): container finished" podID="8cc03862-2ea6-4041-badb-7902bc29fb9f" containerID="acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad" exitCode=0 Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.237868 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9c44689dd-p9ww5" event={"ID":"8cc03862-2ea6-4041-badb-7902bc29fb9f","Type":"ContainerDied","Data":"acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad"} Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.237899 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9c44689dd-p9ww5" 
event={"ID":"8cc03862-2ea6-4041-badb-7902bc29fb9f","Type":"ContainerDied","Data":"98b0cea9c38787d7759ae6578ce355af2a822fe3f5bf551574868bf4f81d6fcc"} Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.237919 4805 scope.go:117] "RemoveContainer" containerID="acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.238061 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9c44689dd-p9ww5" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.266835 4805 scope.go:117] "RemoveContainer" containerID="acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.276248 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-9c44689dd-p9ww5"] Feb 17 00:45:40 crc kubenswrapper[4805]: E0217 00:45:40.277385 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad\": container with ID starting with acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad not found: ID does not exist" containerID="acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.277421 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad"} err="failed to get container status \"acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad\": rpc error: code = NotFound desc = could not find container \"acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad\": container with ID starting with acf22f9a084cec7dc53e3a0115469aac0485a3ae93c4b9b6af0c7d27c14790ad not found: ID does not exist" Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.285249 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-9c44689dd-p9ww5"] Feb 17 00:45:40 crc kubenswrapper[4805]: I0217 00:45:40.797288 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cc03862-2ea6-4041-badb-7902bc29fb9f" path="/var/lib/kubelet/pods/8cc03862-2ea6-4041-badb-7902bc29fb9f/volumes" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.272115 4805 generic.go:334] "Generic (PLEG): container finished" podID="5d95b51c-8931-443e-a499-c7164a006372" containerID="527cba162d34f309854b97cef664358bf83b64e5e60f0b67d2c0cf23072f4bff" exitCode=0 Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.272149 4805 generic.go:334] "Generic (PLEG): container finished" podID="5d95b51c-8931-443e-a499-c7164a006372" containerID="889c20424d7d2fa30eb7ac1d79ada04ba4e086f8d03b7e5d5202d82dc32ec1b1" exitCode=2 Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.272158 4805 generic.go:334] "Generic (PLEG): container finished" podID="5d95b51c-8931-443e-a499-c7164a006372" containerID="0efb05d73511cd236030ff091d24159d1903744b24cc8d9677024977b7658b5b" exitCode=0 Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.272167 4805 generic.go:334] "Generic (PLEG): container finished" podID="5d95b51c-8931-443e-a499-c7164a006372" containerID="4fb0b35b0566673b5817b35e38ab7392afb0dc13ddec9f45478d9dee05941f35" exitCode=0 Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.272188 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5d95b51c-8931-443e-a499-c7164a006372","Type":"ContainerDied","Data":"527cba162d34f309854b97cef664358bf83b64e5e60f0b67d2c0cf23072f4bff"} Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.272251 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d95b51c-8931-443e-a499-c7164a006372","Type":"ContainerDied","Data":"889c20424d7d2fa30eb7ac1d79ada04ba4e086f8d03b7e5d5202d82dc32ec1b1"} Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.272264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d95b51c-8931-443e-a499-c7164a006372","Type":"ContainerDied","Data":"0efb05d73511cd236030ff091d24159d1903744b24cc8d9677024977b7658b5b"} Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.272276 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d95b51c-8931-443e-a499-c7164a006372","Type":"ContainerDied","Data":"4fb0b35b0566673b5817b35e38ab7392afb0dc13ddec9f45478d9dee05941f35"} Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.494122 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.502875 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-sg-core-conf-yaml\") pod \"5d95b51c-8931-443e-a499-c7164a006372\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.502940 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnjb9\" (UniqueName: \"kubernetes.io/projected/5d95b51c-8931-443e-a499-c7164a006372-kube-api-access-vnjb9\") pod \"5d95b51c-8931-443e-a499-c7164a006372\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.503054 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-run-httpd\") pod \"5d95b51c-8931-443e-a499-c7164a006372\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.503307 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-combined-ca-bundle\") pod \"5d95b51c-8931-443e-a499-c7164a006372\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.503352 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-scripts\") pod \"5d95b51c-8931-443e-a499-c7164a006372\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.503396 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-log-httpd\") pod \"5d95b51c-8931-443e-a499-c7164a006372\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.503416 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-config-data\") pod \"5d95b51c-8931-443e-a499-c7164a006372\" (UID: \"5d95b51c-8931-443e-a499-c7164a006372\") " Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.503610 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5d95b51c-8931-443e-a499-c7164a006372" (UID: "5d95b51c-8931-443e-a499-c7164a006372"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.503940 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.504193 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5d95b51c-8931-443e-a499-c7164a006372" (UID: "5d95b51c-8931-443e-a499-c7164a006372"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.508930 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-scripts" (OuterVolumeSpecName: "scripts") pod "5d95b51c-8931-443e-a499-c7164a006372" (UID: "5d95b51c-8931-443e-a499-c7164a006372"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.509020 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d95b51c-8931-443e-a499-c7164a006372-kube-api-access-vnjb9" (OuterVolumeSpecName: "kube-api-access-vnjb9") pod "5d95b51c-8931-443e-a499-c7164a006372" (UID: "5d95b51c-8931-443e-a499-c7164a006372"). InnerVolumeSpecName "kube-api-access-vnjb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.551549 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5d95b51c-8931-443e-a499-c7164a006372" (UID: "5d95b51c-8931-443e-a499-c7164a006372"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.607349 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.607388 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d95b51c-8931-443e-a499-c7164a006372-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.607402 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.607417 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnjb9\" (UniqueName: \"kubernetes.io/projected/5d95b51c-8931-443e-a499-c7164a006372-kube-api-access-vnjb9\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.653450 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d95b51c-8931-443e-a499-c7164a006372" (UID: "5d95b51c-8931-443e-a499-c7164a006372"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.704334 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-config-data" (OuterVolumeSpecName: "config-data") pod "5d95b51c-8931-443e-a499-c7164a006372" (UID: "5d95b51c-8931-443e-a499-c7164a006372"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.708786 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:41 crc kubenswrapper[4805]: I0217 00:45:41.708820 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d95b51c-8931-443e-a499-c7164a006372-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.285805 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d95b51c-8931-443e-a499-c7164a006372","Type":"ContainerDied","Data":"6233fafdeff89ca95995afc301f08a780ba6f4184279e9ca422edd4435749189"} Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.285850 4805 scope.go:117] "RemoveContainer" containerID="527cba162d34f309854b97cef664358bf83b64e5e60f0b67d2c0cf23072f4bff" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.285948 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.317744 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.320077 4805 scope.go:117] "RemoveContainer" containerID="889c20424d7d2fa30eb7ac1d79ada04ba4e086f8d03b7e5d5202d82dc32ec1b1" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.326352 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.346558 4805 scope.go:117] "RemoveContainer" containerID="0efb05d73511cd236030ff091d24159d1903744b24cc8d9677024977b7658b5b" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.348227 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:45:42 crc kubenswrapper[4805]: E0217 00:45:42.348631 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" containerName="heat-api" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.348647 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb384fc5-09b9-47e4-9ed0-06d7330e6abf" containerName="heat-api" Feb 17 00:45:42 crc kubenswrapper[4805]: E0217 00:45:42.348657 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="proxy-httpd" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.348664 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="proxy-httpd" Feb 17 00:45:42 crc kubenswrapper[4805]: E0217 00:45:42.348675 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd72c63-70cf-4c86-8fab-be57a13993f3" containerName="heat-cfnapi" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.348681 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd72c63-70cf-4c86-8fab-be57a13993f3" containerName="heat-cfnapi" Feb 17 00:45:42 crc kubenswrapper[4805]: E0217 00:45:42.348687 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="ceilometer-notification-agent" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.348693 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="ceilometer-notification-agent" Feb 17 00:45:42 crc kubenswrapper[4805]: E0217 00:45:42.348711 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="sg-core" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.348717 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="sg-core" Feb 17 00:45:42 crc kubenswrapper[4805]: E0217 00:45:42.348744 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="ceilometer-central-agent" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.348750 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="ceilometer-central-agent" Feb 17 00:45:42 crc kubenswrapper[4805]: E0217 00:45:42.348758 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc03862-2ea6-4041-badb-7902bc29fb9f" containerName="heat-engine" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.348765 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8cc03862-2ea6-4041-badb-7902bc29fb9f" containerName="heat-engine" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.348973 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="ceilometer-central-agent" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.348986 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="ceilometer-notification-agent" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.348997 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="proxy-httpd" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.349007 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d95b51c-8931-443e-a499-c7164a006372" containerName="sg-core" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.349026 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cc03862-2ea6-4041-badb-7902bc29fb9f" containerName="heat-engine" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.350653 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.352969 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.353175 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.364580 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.388820 4805 scope.go:117] "RemoveContainer" containerID="4fb0b35b0566673b5817b35e38ab7392afb0dc13ddec9f45478d9dee05941f35" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.419679 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.419748 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7dq\" (UniqueName: \"kubernetes.io/projected/659e36cd-77e2-4f47-b7cd-b74591b47b74-kube-api-access-zz7dq\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.419790 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-config-data\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.419829 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-scripts\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.419869 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-run-httpd\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.419923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-log-httpd\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.420031 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.521649 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-config-data\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.521720 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-scripts\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.521768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-run-httpd\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.521826 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-log-httpd\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.521937 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.521966 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.522005 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7dq\" (UniqueName: \"kubernetes.io/projected/659e36cd-77e2-4f47-b7cd-b74591b47b74-kube-api-access-zz7dq\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.522755 
4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-run-httpd\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.522764 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-log-httpd\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.526037 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.526461 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-config-data\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.527424 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.528420 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-scripts\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.540579 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7dq\" (UniqueName: \"kubernetes.io/projected/659e36cd-77e2-4f47-b7cd-b74591b47b74-kube-api-access-zz7dq\") pod \"ceilometer-0\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.671228 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:45:42 crc kubenswrapper[4805]: I0217 00:45:42.798437 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d95b51c-8931-443e-a499-c7164a006372" path="/var/lib/kubelet/pods/5d95b51c-8931-443e-a499-c7164a006372/volumes" Feb 17 00:45:43 crc kubenswrapper[4805]: I0217 00:45:43.236056 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:45:43 crc kubenswrapper[4805]: I0217 00:45:43.296148 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"659e36cd-77e2-4f47-b7cd-b74591b47b74","Type":"ContainerStarted","Data":"040fa14b358e918f72385e686ab6c1743febddaa1097b33e607b008b69dfbc5f"} Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.314830 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"659e36cd-77e2-4f47-b7cd-b74591b47b74","Type":"ContainerStarted","Data":"f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2"} Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.863195 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-t5rdz"] Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.865016 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-t5rdz" Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.877962 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-t5rdz"] Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.948970 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-a86f-account-create-update-bhfpf"] Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.950415 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-a86f-account-create-update-bhfpf" Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.954515 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.970476 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-a86f-account-create-update-bhfpf"] Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.993143 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-operator-scripts\") pod \"aodh-a86f-account-create-update-bhfpf\" (UID: \"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8\") " pod="openstack/aodh-a86f-account-create-update-bhfpf" Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.993296 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxxgd\" (UniqueName: \"kubernetes.io/projected/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-kube-api-access-fxxgd\") pod \"aodh-a86f-account-create-update-bhfpf\" (UID: \"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8\") " pod="openstack/aodh-a86f-account-create-update-bhfpf" Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.993502 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnb2c\" (UniqueName: \"kubernetes.io/projected/1b9acd80-9e5b-4608-89e4-24ec65d4740e-kube-api-access-tnb2c\") pod \"aodh-db-create-t5rdz\" (UID: \"1b9acd80-9e5b-4608-89e4-24ec65d4740e\") " pod="openstack/aodh-db-create-t5rdz" Feb 17 00:45:44 crc kubenswrapper[4805]: I0217 00:45:44.993597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9acd80-9e5b-4608-89e4-24ec65d4740e-operator-scripts\") pod \"aodh-db-create-t5rdz\" (UID: \"1b9acd80-9e5b-4608-89e4-24ec65d4740e\") " pod="openstack/aodh-db-create-t5rdz" Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.096701 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnb2c\" (UniqueName: \"kubernetes.io/projected/1b9acd80-9e5b-4608-89e4-24ec65d4740e-kube-api-access-tnb2c\") pod \"aodh-db-create-t5rdz\" (UID: \"1b9acd80-9e5b-4608-89e4-24ec65d4740e\") " pod="openstack/aodh-db-create-t5rdz" Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.096768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9acd80-9e5b-4608-89e4-24ec65d4740e-operator-scripts\") pod \"aodh-db-create-t5rdz\" (UID: \"1b9acd80-9e5b-4608-89e4-24ec65d4740e\") " pod="openstack/aodh-db-create-t5rdz" Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.096869 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-operator-scripts\") pod \"aodh-a86f-account-create-update-bhfpf\" (UID: \"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8\") " pod="openstack/aodh-a86f-account-create-update-bhfpf" Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.096916 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxxgd\" (UniqueName: \"kubernetes.io/projected/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-kube-api-access-fxxgd\") pod 
\"aodh-a86f-account-create-update-bhfpf\" (UID: \"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8\") " pod="openstack/aodh-a86f-account-create-update-bhfpf" Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.097617 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9acd80-9e5b-4608-89e4-24ec65d4740e-operator-scripts\") pod \"aodh-db-create-t5rdz\" (UID: \"1b9acd80-9e5b-4608-89e4-24ec65d4740e\") " pod="openstack/aodh-db-create-t5rdz" Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.097790 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-operator-scripts\") pod \"aodh-a86f-account-create-update-bhfpf\" (UID: \"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8\") " pod="openstack/aodh-a86f-account-create-update-bhfpf" Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.130342 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxxgd\" (UniqueName: \"kubernetes.io/projected/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-kube-api-access-fxxgd\") pod \"aodh-a86f-account-create-update-bhfpf\" (UID: \"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8\") " pod="openstack/aodh-a86f-account-create-update-bhfpf" Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.130672 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnb2c\" (UniqueName: \"kubernetes.io/projected/1b9acd80-9e5b-4608-89e4-24ec65d4740e-kube-api-access-tnb2c\") pod \"aodh-db-create-t5rdz\" (UID: \"1b9acd80-9e5b-4608-89e4-24ec65d4740e\") " pod="openstack/aodh-db-create-t5rdz" Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.198200 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-t5rdz" Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.277668 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-a86f-account-create-update-bhfpf" Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.345090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"659e36cd-77e2-4f47-b7cd-b74591b47b74","Type":"ContainerStarted","Data":"f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f"} Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.748536 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-t5rdz"] Feb 17 00:45:45 crc kubenswrapper[4805]: I0217 00:45:45.858717 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-a86f-account-create-update-bhfpf"] Feb 17 00:45:46 crc kubenswrapper[4805]: I0217 00:45:46.354546 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"659e36cd-77e2-4f47-b7cd-b74591b47b74","Type":"ContainerStarted","Data":"6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce"} Feb 17 00:45:46 crc kubenswrapper[4805]: I0217 00:45:46.355621 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t5rdz" event={"ID":"1b9acd80-9e5b-4608-89e4-24ec65d4740e","Type":"ContainerStarted","Data":"67ead1324c592bb2f9282dd9c7338d7c0af707b13e79925de758fc59f823933a"} Feb 17 00:45:46 crc kubenswrapper[4805]: I0217 00:45:46.355647 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t5rdz" event={"ID":"1b9acd80-9e5b-4608-89e4-24ec65d4740e","Type":"ContainerStarted","Data":"9303f74722eb8ce5aa8007ad902e072b0240c5adf5092773239a361e83dc98c6"} Feb 17 00:45:46 crc kubenswrapper[4805]: I0217 00:45:46.358260 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a86f-account-create-update-bhfpf" event={"ID":"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8","Type":"ContainerStarted","Data":"8410fe2ae9be1281827b99be50277ebb72bd084c8b661a3b72db40b46851bc94"} Feb 17 00:45:46 crc kubenswrapper[4805]: I0217 00:45:46.358292 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a86f-account-create-update-bhfpf" event={"ID":"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8","Type":"ContainerStarted","Data":"7c09b84618f5c009919c7c60b8db419694ea912d72554abc3c8d5f7b4a16611b"} Feb 17 00:45:46 crc kubenswrapper[4805]: I0217 00:45:46.378822 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-t5rdz" podStartSLOduration=2.378803866 podStartE2EDuration="2.378803866s" podCreationTimestamp="2026-02-17 00:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:45:46.36887871 +0000 UTC m=+1372.384688108" watchObservedRunningTime="2026-02-17 00:45:46.378803866 +0000 UTC m=+1372.394613264" Feb 17 00:45:46 crc kubenswrapper[4805]: I0217 00:45:46.385165 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-a86f-account-create-update-bhfpf" podStartSLOduration=2.385142703 podStartE2EDuration="2.385142703s" podCreationTimestamp="2026-02-17 00:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:45:46.381289855 +0000 UTC m=+1372.397099253" watchObservedRunningTime="2026-02-17 00:45:46.385142703 +0000 UTC m=+1372.400952101" Feb 17 00:45:47 crc kubenswrapper[4805]: I0217 00:45:47.371597 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="e2174d96-6433-4a4d-9f5a-ebd2f9088bd8" containerID="8410fe2ae9be1281827b99be50277ebb72bd084c8b661a3b72db40b46851bc94" exitCode=0 Feb 17 00:45:47 crc kubenswrapper[4805]: I0217 00:45:47.371655 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a86f-account-create-update-bhfpf" event={"ID":"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8","Type":"ContainerDied","Data":"8410fe2ae9be1281827b99be50277ebb72bd084c8b661a3b72db40b46851bc94"} Feb 17 00:45:47 crc kubenswrapper[4805]: I0217 00:45:47.377241 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"659e36cd-77e2-4f47-b7cd-b74591b47b74","Type":"ContainerStarted","Data":"f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6"} Feb 17 00:45:47 crc kubenswrapper[4805]: I0217 00:45:47.377563 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 00:45:47 crc kubenswrapper[4805]: I0217 00:45:47.379192 4805 generic.go:334] "Generic (PLEG): container finished" podID="1b9acd80-9e5b-4608-89e4-24ec65d4740e" containerID="67ead1324c592bb2f9282dd9c7338d7c0af707b13e79925de758fc59f823933a" exitCode=0 Feb 17 00:45:47 crc kubenswrapper[4805]: I0217 00:45:47.379251 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t5rdz" event={"ID":"1b9acd80-9e5b-4608-89e4-24ec65d4740e","Type":"ContainerDied","Data":"67ead1324c592bb2f9282dd9c7338d7c0af707b13e79925de758fc59f823933a"} Feb 17 00:45:47 crc kubenswrapper[4805]: I0217 00:45:47.445380 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.59325305 podStartE2EDuration="5.445354536s" podCreationTimestamp="2026-02-17 00:45:42 +0000 UTC" firstStartedPulling="2026-02-17 00:45:43.238395475 +0000 UTC m=+1369.254204873" lastFinishedPulling="2026-02-17 00:45:47.090496951 +0000 UTC m=+1373.106306359" observedRunningTime="2026-02-17 00:45:47.433047624 +0000 UTC m=+1373.448857022" watchObservedRunningTime="2026-02-17 00:45:47.445354536 +0000 UTC m=+1373.461163934" Feb 17 00:45:48 crc kubenswrapper[4805]: I0217 00:45:48.397451 4805 generic.go:334] "Generic (PLEG): container finished" podID="2e200cb5-e7c9-416c-857b-71caf2b00de3" containerID="8a8e3a3cf7f7794e6e3728588ef954dc866e44e0f7dd4b062f7071342adcca5c" exitCode=0 Feb 17 00:45:48 crc kubenswrapper[4805]: I0217 00:45:48.397514 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2rpd" event={"ID":"2e200cb5-e7c9-416c-857b-71caf2b00de3","Type":"ContainerDied","Data":"8a8e3a3cf7f7794e6e3728588ef954dc866e44e0f7dd4b062f7071342adcca5c"} Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.004013 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a86f-account-create-update-bhfpf" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.026646 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-t5rdz" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.092976 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnb2c\" (UniqueName: \"kubernetes.io/projected/1b9acd80-9e5b-4608-89e4-24ec65d4740e-kube-api-access-tnb2c\") pod \"1b9acd80-9e5b-4608-89e4-24ec65d4740e\" (UID: \"1b9acd80-9e5b-4608-89e4-24ec65d4740e\") " Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.093189 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9acd80-9e5b-4608-89e4-24ec65d4740e-operator-scripts\") pod \"1b9acd80-9e5b-4608-89e4-24ec65d4740e\" (UID: \"1b9acd80-9e5b-4608-89e4-24ec65d4740e\") " Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.093305 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-operator-scripts\") pod \"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8\" (UID: \"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8\") " Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.093555 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxxgd\" (UniqueName: \"kubernetes.io/projected/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-kube-api-access-fxxgd\") pod \"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8\" (UID: \"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8\") " Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.094036 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b9acd80-9e5b-4608-89e4-24ec65d4740e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b9acd80-9e5b-4608-89e4-24ec65d4740e" (UID: "1b9acd80-9e5b-4608-89e4-24ec65d4740e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.094164 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9acd80-9e5b-4608-89e4-24ec65d4740e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.094640 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2174d96-6433-4a4d-9f5a-ebd2f9088bd8" (UID: "e2174d96-6433-4a4d-9f5a-ebd2f9088bd8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.101108 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-kube-api-access-fxxgd" (OuterVolumeSpecName: "kube-api-access-fxxgd") pod "e2174d96-6433-4a4d-9f5a-ebd2f9088bd8" (UID: "e2174d96-6433-4a4d-9f5a-ebd2f9088bd8"). InnerVolumeSpecName "kube-api-access-fxxgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.104634 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b9acd80-9e5b-4608-89e4-24ec65d4740e-kube-api-access-tnb2c" (OuterVolumeSpecName: "kube-api-access-tnb2c") pod "1b9acd80-9e5b-4608-89e4-24ec65d4740e" (UID: "1b9acd80-9e5b-4608-89e4-24ec65d4740e"). InnerVolumeSpecName "kube-api-access-tnb2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.204981 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxxgd\" (UniqueName: \"kubernetes.io/projected/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-kube-api-access-fxxgd\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.205019 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnb2c\" (UniqueName: \"kubernetes.io/projected/1b9acd80-9e5b-4608-89e4-24ec65d4740e-kube-api-access-tnb2c\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.205029 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.418049 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-t5rdz" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.418798 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-t5rdz" event={"ID":"1b9acd80-9e5b-4608-89e4-24ec65d4740e","Type":"ContainerDied","Data":"9303f74722eb8ce5aa8007ad902e072b0240c5adf5092773239a361e83dc98c6"} Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.418847 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9303f74722eb8ce5aa8007ad902e072b0240c5adf5092773239a361e83dc98c6" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.425109 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-a86f-account-create-update-bhfpf" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.425032 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-a86f-account-create-update-bhfpf" event={"ID":"e2174d96-6433-4a4d-9f5a-ebd2f9088bd8","Type":"ContainerDied","Data":"7c09b84618f5c009919c7c60b8db419694ea912d72554abc3c8d5f7b4a16611b"} Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.426732 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c09b84618f5c009919c7c60b8db419694ea912d72554abc3c8d5f7b4a16611b" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.855746 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.918801 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-config-data\") pod \"2e200cb5-e7c9-416c-857b-71caf2b00de3\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.918885 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-combined-ca-bundle\") pod \"2e200cb5-e7c9-416c-857b-71caf2b00de3\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.919018 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-scripts\") pod \"2e200cb5-e7c9-416c-857b-71caf2b00de3\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.919067 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czhbs\" (UniqueName: \"kubernetes.io/projected/2e200cb5-e7c9-416c-857b-71caf2b00de3-kube-api-access-czhbs\") pod \"2e200cb5-e7c9-416c-857b-71caf2b00de3\" (UID: \"2e200cb5-e7c9-416c-857b-71caf2b00de3\") " Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.924113 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e200cb5-e7c9-416c-857b-71caf2b00de3-kube-api-access-czhbs" (OuterVolumeSpecName: "kube-api-access-czhbs") pod "2e200cb5-e7c9-416c-857b-71caf2b00de3" (UID: "2e200cb5-e7c9-416c-857b-71caf2b00de3"). InnerVolumeSpecName "kube-api-access-czhbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.934756 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-scripts" (OuterVolumeSpecName: "scripts") pod "2e200cb5-e7c9-416c-857b-71caf2b00de3" (UID: "2e200cb5-e7c9-416c-857b-71caf2b00de3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.960587 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e200cb5-e7c9-416c-857b-71caf2b00de3" (UID: "2e200cb5-e7c9-416c-857b-71caf2b00de3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:49 crc kubenswrapper[4805]: I0217 00:45:49.961171 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-config-data" (OuterVolumeSpecName: "config-data") pod "2e200cb5-e7c9-416c-857b-71caf2b00de3" (UID: "2e200cb5-e7c9-416c-857b-71caf2b00de3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.021404 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.021445 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.021459 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e200cb5-e7c9-416c-857b-71caf2b00de3-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.021470 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czhbs\" (UniqueName: \"kubernetes.io/projected/2e200cb5-e7c9-416c-857b-71caf2b00de3-kube-api-access-czhbs\") on node \"crc\" DevicePath \"\"" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.338043 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-rb4bb"] Feb 17 00:45:50 crc kubenswrapper[4805]: E0217 00:45:50.338580 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e200cb5-e7c9-416c-857b-71caf2b00de3" containerName="nova-cell0-conductor-db-sync" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.338597 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e200cb5-e7c9-416c-857b-71caf2b00de3" containerName="nova-cell0-conductor-db-sync" Feb 17 00:45:50 crc kubenswrapper[4805]: E0217 00:45:50.338616 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2174d96-6433-4a4d-9f5a-ebd2f9088bd8" containerName="mariadb-account-create-update" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.338623 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2174d96-6433-4a4d-9f5a-ebd2f9088bd8" containerName="mariadb-account-create-update" Feb 17 00:45:50 crc kubenswrapper[4805]: E0217 00:45:50.338642 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9acd80-9e5b-4608-89e4-24ec65d4740e" containerName="mariadb-database-create" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.338648 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9acd80-9e5b-4608-89e4-24ec65d4740e" containerName="mariadb-database-create" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.338812 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2174d96-6433-4a4d-9f5a-ebd2f9088bd8" containerName="mariadb-account-create-update" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.338829 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9acd80-9e5b-4608-89e4-24ec65d4740e" containerName="mariadb-database-create" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.338845 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e200cb5-e7c9-416c-857b-71caf2b00de3" containerName="nova-cell0-conductor-db-sync" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.339523 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.348243 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.348372 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-drlz8" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.348506 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.352355 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.375508 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rb4bb"] Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.438929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-l2rpd" event={"ID":"2e200cb5-e7c9-416c-857b-71caf2b00de3","Type":"ContainerDied","Data":"6aeab6291a2349dbd18a2b3002507c021d54ec6a7ae14004a7d247d7aea0e290"} Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.438961 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aeab6291a2349dbd18a2b3002507c021d54ec6a7ae14004a7d247d7aea0e290" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.438999 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-l2rpd" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.514753 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.516238 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.518580 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.518803 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-84zgj" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.537996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-config-data\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.538081 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfzb\" (UniqueName: \"kubernetes.io/projected/10155d2c-a497-44a7-9cbd-c7023421781f-kube-api-access-lwfzb\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.538134 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-combined-ca-bundle\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.538160 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-scripts\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.538186 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b23e4f-706d-470f-9b61-ea4e1a3ec9c7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"97b23e4f-706d-470f-9b61-ea4e1a3ec9c7\") " pod="openstack/nova-cell0-conductor-0" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.538207 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjlfg\" (UniqueName: \"kubernetes.io/projected/97b23e4f-706d-470f-9b61-ea4e1a3ec9c7-kube-api-access-zjlfg\") pod \"nova-cell0-conductor-0\" (UID: \"97b23e4f-706d-470f-9b61-ea4e1a3ec9c7\") " pod="openstack/nova-cell0-conductor-0" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.538228 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b23e4f-706d-470f-9b61-ea4e1a3ec9c7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"97b23e4f-706d-470f-9b61-ea4e1a3ec9c7\") " pod="openstack/nova-cell0-conductor-0" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.538315 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.639686 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-combined-ca-bundle\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.640011 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-scripts\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.640049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b23e4f-706d-470f-9b61-ea4e1a3ec9c7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"97b23e4f-706d-470f-9b61-ea4e1a3ec9c7\") " pod="openstack/nova-cell0-conductor-0" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.640073 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjlfg\" (UniqueName: \"kubernetes.io/projected/97b23e4f-706d-470f-9b61-ea4e1a3ec9c7-kube-api-access-zjlfg\") pod \"nova-cell0-conductor-0\" (UID: \"97b23e4f-706d-470f-9b61-ea4e1a3ec9c7\") " pod="openstack/nova-cell0-conductor-0" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.640100 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b23e4f-706d-470f-9b61-ea4e1a3ec9c7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"97b23e4f-706d-470f-9b61-ea4e1a3ec9c7\") " pod="openstack/nova-cell0-conductor-0" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.640200 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-config-data\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.640309 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfzb\" (UniqueName: \"kubernetes.io/projected/10155d2c-a497-44a7-9cbd-c7023421781f-kube-api-access-lwfzb\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.643395 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97b23e4f-706d-470f-9b61-ea4e1a3ec9c7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"97b23e4f-706d-470f-9b61-ea4e1a3ec9c7\") " pod="openstack/nova-cell0-conductor-0" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.644012 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97b23e4f-706d-470f-9b61-ea4e1a3ec9c7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"97b23e4f-706d-470f-9b61-ea4e1a3ec9c7\") " pod="openstack/nova-cell0-conductor-0" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.662126 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjlfg\" (UniqueName: \"kubernetes.io/projected/97b23e4f-706d-470f-9b61-ea4e1a3ec9c7-kube-api-access-zjlfg\") pod \"nova-cell0-conductor-0\" (UID: \"97b23e4f-706d-470f-9b61-ea4e1a3ec9c7\") " pod="openstack/nova-cell0-conductor-0" Feb 17 
00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.673818 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-combined-ca-bundle\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.673926 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-scripts\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.674015 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwfzb\" (UniqueName: \"kubernetes.io/projected/10155d2c-a497-44a7-9cbd-c7023421781f-kube-api-access-lwfzb\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.674290 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-config-data\") pod \"aodh-db-sync-rb4bb\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.862011 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 00:45:50 crc kubenswrapper[4805]: I0217 00:45:50.955656 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:45:51 crc kubenswrapper[4805]: I0217 00:45:51.373722 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 00:45:51 crc kubenswrapper[4805]: I0217 00:45:51.455063 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"97b23e4f-706d-470f-9b61-ea4e1a3ec9c7","Type":"ContainerStarted","Data":"e790540c6d3ce63aadf7ca702ed402700e052143e04d08dd953d7b3e08cd559a"} Feb 17 00:45:51 crc kubenswrapper[4805]: I0217 00:45:51.536343 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rb4bb"] Feb 17 00:45:52 crc kubenswrapper[4805]: I0217 00:45:52.464974 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rb4bb" event={"ID":"10155d2c-a497-44a7-9cbd-c7023421781f","Type":"ContainerStarted","Data":"818ba15cd1572958010165efe03058d551727754df8eca0052eb25a8424ade35"} Feb 17 00:45:52 crc kubenswrapper[4805]: I0217 00:45:52.467263 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"97b23e4f-706d-470f-9b61-ea4e1a3ec9c7","Type":"ContainerStarted","Data":"d81e342b2837f9d034b7621af0a6d6da3f85997550954481dac00debd7a34e51"} Feb 17 00:45:52 crc kubenswrapper[4805]: I0217 00:45:52.467361 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 17 00:45:52 crc kubenswrapper[4805]: I0217 00:45:52.494732 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.494710489 podStartE2EDuration="2.494710489s" podCreationTimestamp="2026-02-17 00:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:45:52.481749038 +0000 UTC m=+1378.497558446" watchObservedRunningTime="2026-02-17 00:45:52.494710489 +0000 UTC m=+1378.510519897" Feb 17 00:45:55 crc kubenswrapper[4805]: I0217 00:45:55.676441 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 00:45:56 crc kubenswrapper[4805]: I0217 00:45:56.529943 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rb4bb" event={"ID":"10155d2c-a497-44a7-9cbd-c7023421781f","Type":"ContainerStarted","Data":"09de379b8f5db063b29dd5eab57f4e0d9c4565882e5a42d14afd344c6835f6ec"} Feb 17 00:45:56 crc kubenswrapper[4805]: I0217 00:45:56.553245 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-rb4bb" podStartSLOduration=2.416960356 podStartE2EDuration="6.553222839s" podCreationTimestamp="2026-02-17 00:45:50 +0000 UTC" firstStartedPulling="2026-02-17 00:45:51.536994418 +0000 UTC m=+1377.552803806" lastFinishedPulling="2026-02-17 00:45:55.673256901 +0000 UTC m=+1381.689066289" observedRunningTime="2026-02-17 00:45:56.551697316 +0000 UTC m=+1382.567506794" watchObservedRunningTime="2026-02-17 00:45:56.553222839 +0000 UTC m=+1382.569032247" Feb 17 00:45:58 crc kubenswrapper[4805]: I0217 00:45:58.558844 4805 generic.go:334] "Generic (PLEG): container finished" podID="10155d2c-a497-44a7-9cbd-c7023421781f" containerID="09de379b8f5db063b29dd5eab57f4e0d9c4565882e5a42d14afd344c6835f6ec" exitCode=0 Feb 17 00:45:58 crc kubenswrapper[4805]: I0217 00:45:58.558927 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rb4bb" event={"ID":"10155d2c-a497-44a7-9cbd-c7023421781f","Type":"ContainerDied","Data":"09de379b8f5db063b29dd5eab57f4e0d9c4565882e5a42d14afd344c6835f6ec"} Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.107107 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.265677 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-combined-ca-bundle\") pod \"10155d2c-a497-44a7-9cbd-c7023421781f\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.265883 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-scripts\") pod \"10155d2c-a497-44a7-9cbd-c7023421781f\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.265991 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwfzb\" (UniqueName: \"kubernetes.io/projected/10155d2c-a497-44a7-9cbd-c7023421781f-kube-api-access-lwfzb\") pod \"10155d2c-a497-44a7-9cbd-c7023421781f\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.266046 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-config-data\") pod \"10155d2c-a497-44a7-9cbd-c7023421781f\" (UID: \"10155d2c-a497-44a7-9cbd-c7023421781f\") " Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.271544 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-scripts" (OuterVolumeSpecName: "scripts") pod "10155d2c-a497-44a7-9cbd-c7023421781f" (UID: "10155d2c-a497-44a7-9cbd-c7023421781f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.278719 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10155d2c-a497-44a7-9cbd-c7023421781f-kube-api-access-lwfzb" (OuterVolumeSpecName: "kube-api-access-lwfzb") pod "10155d2c-a497-44a7-9cbd-c7023421781f" (UID: "10155d2c-a497-44a7-9cbd-c7023421781f"). InnerVolumeSpecName "kube-api-access-lwfzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.317704 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-config-data" (OuterVolumeSpecName: "config-data") pod "10155d2c-a497-44a7-9cbd-c7023421781f" (UID: "10155d2c-a497-44a7-9cbd-c7023421781f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.318293 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10155d2c-a497-44a7-9cbd-c7023421781f" (UID: "10155d2c-a497-44a7-9cbd-c7023421781f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.369505 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.369566 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.369587 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwfzb\" (UniqueName: \"kubernetes.io/projected/10155d2c-a497-44a7-9cbd-c7023421781f-kube-api-access-lwfzb\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.369609 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10155d2c-a497-44a7-9cbd-c7023421781f-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.583424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rb4bb" event={"ID":"10155d2c-a497-44a7-9cbd-c7023421781f","Type":"ContainerDied","Data":"818ba15cd1572958010165efe03058d551727754df8eca0052eb25a8424ade35"} Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.583465 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="818ba15cd1572958010165efe03058d551727754df8eca0052eb25a8424ade35" Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.583514 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-rb4bb" Feb 17 00:46:00 crc kubenswrapper[4805]: I0217 00:46:00.890913 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.427526 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7b5v8"] Feb 17 00:46:01 crc kubenswrapper[4805]: E0217 00:46:01.428006 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10155d2c-a497-44a7-9cbd-c7023421781f" containerName="aodh-db-sync" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.428022 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="10155d2c-a497-44a7-9cbd-c7023421781f" containerName="aodh-db-sync" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.428284 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="10155d2c-a497-44a7-9cbd-c7023421781f" containerName="aodh-db-sync" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.429137 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.431787 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.431990 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.450076 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7b5v8"] Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.597014 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-config-data\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.597295 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbbfr\" (UniqueName: \"kubernetes.io/projected/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-kube-api-access-cbbfr\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.597389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-scripts\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.597571 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.631877 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.633746 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.644394 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.655093 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.656412 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.664687 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.667273 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.674280 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.676605 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.678691 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-drlz8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.678829 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.679020 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.683470 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.704304 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.704394 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-config-data\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.704463 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbbfr\" (UniqueName: \"kubernetes.io/projected/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-kube-api-access-cbbfr\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.704485 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-scripts\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.707533 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.708845 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.713618 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.720030 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.735010 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-scripts\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.736243 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-config-data\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.743862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbbfr\" (UniqueName: \"kubernetes.io/projected/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-kube-api-access-cbbfr\") pod \"nova-cell0-cell-mapping-7b5v8\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.747475 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.749715 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.756427 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.805964 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-config-data\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.806469 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.806493 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpbnq\" (UniqueName: \"kubernetes.io/projected/a5230e8c-2abe-4835-8fed-ad359b0f52a2-kube-api-access-xpbnq\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.806512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.806548 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4knq\" (UniqueName: \"kubernetes.io/projected/c7f08d9c-83a5-4818-992b-904fb159ec36-kube-api-access-b4knq\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.806565 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.806590 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-scripts\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.806610 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/914d4f54-76f7-402b-b453-b5badec5d1bb-logs\") pod \"nova-api-0\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.806645 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-config-data\") pod \"nova-api-0\" (UID: 
\"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.806702 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.806798 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8x2m\" (UniqueName: \"kubernetes.io/projected/914d4f54-76f7-402b-b453-b5badec5d1bb-kube-api-access-n8x2m\") pod \"nova-api-0\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.877692 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.910915 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.913868 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.918366 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.929359 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8x2m\" (UniqueName: \"kubernetes.io/projected/914d4f54-76f7-402b-b453-b5badec5d1bb-kube-api-access-n8x2m\") pod \"nova-api-0\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.929433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-config-data\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.929484 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.929507 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpbnq\" (UniqueName: \"kubernetes.io/projected/a5230e8c-2abe-4835-8fed-ad359b0f52a2-kube-api-access-xpbnq\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.929540 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.929589 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.929632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4knq\" (UniqueName: \"kubernetes.io/projected/c7f08d9c-83a5-4818-992b-904fb159ec36-kube-api-access-b4knq\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.929656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.929685 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-config-data\") pod \"nova-scheduler-0\" (UID: \"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.951500 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-config-data\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.951601 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-scripts\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.951645 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/914d4f54-76f7-402b-b453-b5badec5d1bb-logs\") pod \"nova-api-0\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.951729 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-config-data\") pod \"nova-api-0\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.951818 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.951930 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5dbf\" (UniqueName: \"kubernetes.io/projected/faf65c97-4fea-4e54-a6d2-847c03970bf5-kube-api-access-w5dbf\") pod \"nova-scheduler-0\" (UID: \"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.954129 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.955317 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.972309 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.979805 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-scripts\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:01 crc kubenswrapper[4805]: I0217 00:46:01.985431 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4knq\" (UniqueName: \"kubernetes.io/projected/c7f08d9c-83a5-4818-992b-904fb159ec36-kube-api-access-b4knq\") pod \"aodh-0\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " pod="openstack/aodh-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:01.999781 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpbnq\" (UniqueName: \"kubernetes.io/projected/a5230e8c-2abe-4835-8fed-ad359b0f52a2-kube-api-access-xpbnq\") pod \"nova-cell1-novncproxy-0\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.020683 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/914d4f54-76f7-402b-b453-b5badec5d1bb-logs\") pod \"nova-api-0\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.021813 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-config-data\") pod \"nova-api-0\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.024449 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8x2m\" (UniqueName: \"kubernetes.io/projected/914d4f54-76f7-402b-b453-b5badec5d1bb-kube-api-access-n8x2m\") pod \"nova-api-0\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.024488 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-gs54b"] Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.025970 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " pod="openstack/nova-api-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 
00:46:02.026342 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.058887 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2860336-e1cb-448e-b21a-fa982c89be62-logs\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.058955 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-config-data\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.058994 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5dbf\" (UniqueName: \"kubernetes.io/projected/faf65c97-4fea-4e54-a6d2-847c03970bf5-kube-api-access-w5dbf\") pod \"nova-scheduler-0\" (UID: \"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.059091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.059118 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfl9c\" (UniqueName: \"kubernetes.io/projected/c2860336-e1cb-448e-b21a-fa982c89be62-kube-api-access-mfl9c\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.059139 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-config-data\") pod \"nova-scheduler-0\" (UID: \"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.059198 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.059869 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-gs54b"] Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.062959 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.063657 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-config-data\") pod \"nova-scheduler-0\" (UID: 
\"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.082784 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5dbf\" (UniqueName: \"kubernetes.io/projected/faf65c97-4fea-4e54-a6d2-847c03970bf5-kube-api-access-w5dbf\") pod \"nova-scheduler-0\" (UID: \"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.160526 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mqg\" (UniqueName: \"kubernetes.io/projected/e7da63b3-96f0-46ef-8ff4-e5ec29821564-kube-api-access-h7mqg\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.160601 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.160661 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfl9c\" (UniqueName: \"kubernetes.io/projected/c2860336-e1cb-448e-b21a-fa982c89be62-kube-api-access-mfl9c\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.160679 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.160717 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-config\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.160758 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.160782 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.160807 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2860336-e1cb-448e-b21a-fa982c89be62-logs\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " 
pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.160842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-config-data\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.160898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-svc\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.165343 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.165766 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2860336-e1cb-448e-b21a-fa982c89be62-logs\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.167081 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-config-data\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.181938 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfl9c\" (UniqueName: \"kubernetes.io/projected/c2860336-e1cb-448e-b21a-fa982c89be62-kube-api-access-mfl9c\") pod \"nova-metadata-0\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.207606 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.249170 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.262883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7mqg\" (UniqueName: \"kubernetes.io/projected/e7da63b3-96f0-46ef-8ff4-e5ec29821564-kube-api-access-h7mqg\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.263188 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.263246 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.263288 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-config\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.263407 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.263554 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-svc\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.264614 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.264775 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-config\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.264777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-svc\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.265184 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.265732 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.269775 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.297306 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7mqg\" (UniqueName: \"kubernetes.io/projected/e7da63b3-96f0-46ef-8ff4-e5ec29821564-kube-api-access-h7mqg\") pod \"dnsmasq-dns-9b86998b5-gs54b\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.315854 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.339019 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.360864 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:02 crc kubenswrapper[4805]: I0217 00:46:02.552086 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7b5v8"] Feb 17 00:46:02 crc kubenswrapper[4805]: W0217 00:46:02.612640 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c2db1e4_4262_4a81_83fe_a9b9f0565beb.slice/crio-b69de45c4d9f83577bb0361fe09355927751c3399be2bba01181323512c4cf00 WatchSource:0}: Error finding container b69de45c4d9f83577bb0361fe09355927751c3399be2bba01181323512c4cf00: Status 404 returned error can't find the container with id b69de45c4d9f83577bb0361fe09355927751c3399be2bba01181323512c4cf00 Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.041312 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 00:46:03 crc kubenswrapper[4805]: W0217 00:46:03.155107 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod914d4f54_76f7_402b_b453_b5badec5d1bb.slice/crio-f5a69131569b4104572f8c3805e62182635286c403e6702f3a822dfb15a52e6f WatchSource:0}: Error finding container f5a69131569b4104572f8c3805e62182635286c403e6702f3a822dfb15a52e6f: Status 404 returned error can't find the container with id f5a69131569b4104572f8c3805e62182635286c403e6702f3a822dfb15a52e6f Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.159667 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.499195 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.513523 4805 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.524859 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.533709 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-gs54b"] Feb 17 00:46:03 crc kubenswrapper[4805]: W0217 00:46:03.541668 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7da63b3_96f0_46ef_8ff4_e5ec29821564.slice/crio-daff57d5e4bc336963aef196ebe0eef20fc01d06caa539e457da219aaa247add WatchSource:0}: Error finding container daff57d5e4bc336963aef196ebe0eef20fc01d06caa539e457da219aaa247add: Status 404 returned error can't find the container with id daff57d5e4bc336963aef196ebe0eef20fc01d06caa539e457da219aaa247add Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.651024 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"faf65c97-4fea-4e54-a6d2-847c03970bf5","Type":"ContainerStarted","Data":"9b1c751f14d7972e52be61054ca748f7aac60973bfc2c5c5c6e6da221bfba6ca"} Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.663104 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"914d4f54-76f7-402b-b453-b5badec5d1bb","Type":"ContainerStarted","Data":"f5a69131569b4104572f8c3805e62182635286c403e6702f3a822dfb15a52e6f"} Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.667303 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" event={"ID":"e7da63b3-96f0-46ef-8ff4-e5ec29821564","Type":"ContainerStarted","Data":"daff57d5e4bc336963aef196ebe0eef20fc01d06caa539e457da219aaa247add"} Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.671761 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7f08d9c-83a5-4818-992b-904fb159ec36","Type":"ContainerStarted","Data":"8fd5abc603a1f46ab17b6c0731ae2157b226e73f1547a10e5a5a4e1b90abae54"} Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.676480 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7b5v8" event={"ID":"1c2db1e4-4262-4a81-83fe-a9b9f0565beb","Type":"ContainerStarted","Data":"49fb302df7845bc1cad0e323dac46a516f0ac83b0d976718a49d4d4a0252f981"} Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.676536 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7b5v8" event={"ID":"1c2db1e4-4262-4a81-83fe-a9b9f0565beb","Type":"ContainerStarted","Data":"b69de45c4d9f83577bb0361fe09355927751c3399be2bba01181323512c4cf00"} Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.683113 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a5230e8c-2abe-4835-8fed-ad359b0f52a2","Type":"ContainerStarted","Data":"91b0c6fab66c199fd69f7b81075838d01226df633217c1da60e92256821fcff6"} Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.688995 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2860336-e1cb-448e-b21a-fa982c89be62","Type":"ContainerStarted","Data":"9ec9a415143c3ec18c3dda6fd949ab6674e257c4f6be82dce30c1bc643c5d9b4"} Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.691417 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-jbwzz"] Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.692897 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.696833 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.697013 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.734454 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jbwzz"] Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.740953 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7b5v8" podStartSLOduration=2.740935447 podStartE2EDuration="2.740935447s" podCreationTimestamp="2026-02-17 00:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:03.689421534 +0000 UTC m=+1389.705230932" watchObservedRunningTime="2026-02-17 00:46:03.740935447 +0000 UTC m=+1389.756744845" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.814377 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-scripts\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.814455 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.814537 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-config-data\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.814657 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jjmp\" (UniqueName: \"kubernetes.io/projected/3de33584-3604-4b64-ae95-9d18066a35a6-kube-api-access-2jjmp\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.916929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-config-data\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.917268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2jjmp\" (UniqueName: \"kubernetes.io/projected/3de33584-3604-4b64-ae95-9d18066a35a6-kube-api-access-2jjmp\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.917339 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-scripts\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.917392 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.938271 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.938840 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jjmp\" (UniqueName: \"kubernetes.io/projected/3de33584-3604-4b64-ae95-9d18066a35a6-kube-api-access-2jjmp\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.939483 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-config-data\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:03 crc kubenswrapper[4805]: I0217 00:46:03.959180 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-scripts\") pod \"nova-cell1-conductor-db-sync-jbwzz\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:04 crc kubenswrapper[4805]: I0217 00:46:04.150339 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:04 crc kubenswrapper[4805]: I0217 00:46:04.763939 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jbwzz"] Feb 17 00:46:04 crc kubenswrapper[4805]: I0217 00:46:04.769060 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7f08d9c-83a5-4818-992b-904fb159ec36","Type":"ContainerStarted","Data":"647f5e61f4fad824e69b8e3b7b72a9a15e50feb1eef3fc00d642c22c0a441735"} Feb 17 00:46:04 crc kubenswrapper[4805]: I0217 00:46:04.775310 4805 generic.go:334] "Generic (PLEG): container finished" podID="e7da63b3-96f0-46ef-8ff4-e5ec29821564" containerID="d85754b4ab82fd09371d12ea5e02a1f5b1f60a08c7c44e55ae92e21841d8b4e9" exitCode=0 Feb 17 00:46:04 crc kubenswrapper[4805]: I0217 00:46:04.777425 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" event={"ID":"e7da63b3-96f0-46ef-8ff4-e5ec29821564","Type":"ContainerDied","Data":"d85754b4ab82fd09371d12ea5e02a1f5b1f60a08c7c44e55ae92e21841d8b4e9"} Feb 17 00:46:05 crc kubenswrapper[4805]: I0217 00:46:05.795089 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jbwzz" event={"ID":"3de33584-3604-4b64-ae95-9d18066a35a6","Type":"ContainerStarted","Data":"5f0cd60d7fbd48c58c1edeb30fe4192e14f9dd1277a35fa9a671b5eb210a3f7d"} Feb 17 00:46:05 crc kubenswrapper[4805]: I0217 00:46:05.795613 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jbwzz" event={"ID":"3de33584-3604-4b64-ae95-9d18066a35a6","Type":"ContainerStarted","Data":"d20c5df2a216d9e5fc022f848d18a586c019febf0ff0074312c642d809c68ee4"} Feb 17 00:46:05 crc kubenswrapper[4805]: I0217 00:46:05.798983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" event={"ID":"e7da63b3-96f0-46ef-8ff4-e5ec29821564","Type":"ContainerStarted","Data":"18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a"} Feb 17 00:46:05 crc kubenswrapper[4805]: I0217 00:46:05.799108 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:05 crc kubenswrapper[4805]: I0217 00:46:05.838969 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jbwzz" podStartSLOduration=2.8389468300000003 podStartE2EDuration="2.83894683s" podCreationTimestamp="2026-02-17 00:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:05.813952334 +0000 UTC m=+1391.829761722" watchObservedRunningTime="2026-02-17 00:46:05.83894683 +0000 UTC m=+1391.854756228" Feb 17 00:46:05 crc kubenswrapper[4805]: I0217 00:46:05.842097 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" podStartSLOduration=4.8420797570000005 podStartE2EDuration="4.842079757s" podCreationTimestamp="2026-02-17 00:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:05.828288253 +0000 UTC m=+1391.844097651" watchObservedRunningTime="2026-02-17 00:46:05.842079757 +0000 UTC m=+1391.857889155" Feb 17 00:46:06 crc kubenswrapper[4805]: I0217 00:46:06.588972 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] 
Feb 17 00:46:06 crc kubenswrapper[4805]: I0217 00:46:06.607053 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:06 crc kubenswrapper[4805]: I0217 00:46:06.626317 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:06 crc kubenswrapper[4805]: I0217 00:46:06.626693 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="ceilometer-central-agent" containerID="cri-o://f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2" gracePeriod=30 Feb 17 00:46:06 crc kubenswrapper[4805]: I0217 00:46:06.626758 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="proxy-httpd" containerID="cri-o://f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6" gracePeriod=30 Feb 17 00:46:06 crc kubenswrapper[4805]: I0217 00:46:06.626800 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="sg-core" containerID="cri-o://6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce" gracePeriod=30 Feb 17 00:46:06 crc kubenswrapper[4805]: I0217 00:46:06.626838 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="ceilometer-notification-agent" containerID="cri-o://f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f" gracePeriod=30 Feb 17 00:46:06 crc kubenswrapper[4805]: I0217 00:46:06.633497 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 00:46:06 crc kubenswrapper[4805]: I0217 00:46:06.827086 4805 generic.go:334] "Generic (PLEG): container finished" podID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerID="6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce" exitCode=2 Feb 17 00:46:06 crc kubenswrapper[4805]: I0217 00:46:06.827144 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"659e36cd-77e2-4f47-b7cd-b74591b47b74","Type":"ContainerDied","Data":"6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce"} Feb 17 00:46:07 crc kubenswrapper[4805]: I0217 00:46:07.842866 4805 generic.go:334] "Generic (PLEG): container finished" podID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerID="f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6" exitCode=0 Feb 17 00:46:07 crc kubenswrapper[4805]: I0217 00:46:07.843152 4805 generic.go:334] "Generic (PLEG): container finished" podID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerID="f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2" exitCode=0 Feb 17 00:46:07 crc kubenswrapper[4805]: I0217 00:46:07.842940 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"659e36cd-77e2-4f47-b7cd-b74591b47b74","Type":"ContainerDied","Data":"f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6"} Feb 17 00:46:07 crc kubenswrapper[4805]: I0217 00:46:07.843191 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"659e36cd-77e2-4f47-b7cd-b74591b47b74","Type":"ContainerDied","Data":"f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2"} Feb 17 00:46:09 crc 
kubenswrapper[4805]: I0217 00:46:09.879955 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a5230e8c-2abe-4835-8fed-ad359b0f52a2","Type":"ContainerStarted","Data":"415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62"} Feb 17 00:46:09 crc kubenswrapper[4805]: I0217 00:46:09.881268 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a5230e8c-2abe-4835-8fed-ad359b0f52a2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62" gracePeriod=30 Feb 17 00:46:09 crc kubenswrapper[4805]: I0217 00:46:09.893214 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2860336-e1cb-448e-b21a-fa982c89be62","Type":"ContainerStarted","Data":"a7fc104b13a61ca30d228b916e9cc02cbeac806fba93b10b699905f257aa980c"} Feb 17 00:46:09 crc kubenswrapper[4805]: I0217 00:46:09.893298 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2860336-e1cb-448e-b21a-fa982c89be62","Type":"ContainerStarted","Data":"80578d35c641fad864cf0031303ea4abdd06e08984e0ea5e927c4b00981f6267"} Feb 17 00:46:09 crc kubenswrapper[4805]: I0217 00:46:09.893459 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c2860336-e1cb-448e-b21a-fa982c89be62" containerName="nova-metadata-log" containerID="cri-o://80578d35c641fad864cf0031303ea4abdd06e08984e0ea5e927c4b00981f6267" gracePeriod=30 Feb 17 00:46:09 crc kubenswrapper[4805]: I0217 00:46:09.893707 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c2860336-e1cb-448e-b21a-fa982c89be62" containerName="nova-metadata-metadata" containerID="cri-o://a7fc104b13a61ca30d228b916e9cc02cbeac806fba93b10b699905f257aa980c" gracePeriod=30 Feb 17 00:46:09 crc kubenswrapper[4805]: I0217 00:46:09.905765 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"faf65c97-4fea-4e54-a6d2-847c03970bf5","Type":"ContainerStarted","Data":"fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916"} Feb 17 00:46:09 crc kubenswrapper[4805]: I0217 00:46:09.917904 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"914d4f54-76f7-402b-b453-b5badec5d1bb","Type":"ContainerStarted","Data":"937bd9e1f51a14bbe70f454cd781f8b5df6908f9362bac05282f4b16be8c02a4"} Feb 17 00:46:09 crc kubenswrapper[4805]: I0217 00:46:09.917947 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"914d4f54-76f7-402b-b453-b5badec5d1bb","Type":"ContainerStarted","Data":"7d42044813be7120c6c614bcc9cf97b18880ae87fe0e45589a057334b702dedd"} Feb 17 00:46:09 crc kubenswrapper[4805]: I0217 00:46:09.931200 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7f08d9c-83a5-4818-992b-904fb159ec36","Type":"ContainerStarted","Data":"ca50c322fbc6cc974342d4fd9cc9184d3b3addce0e501fa53060ca27d9ddcce6"} Feb 17 00:46:09 crc kubenswrapper[4805]: I0217 00:46:09.946581 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.969159797 podStartE2EDuration="8.946562186s" podCreationTimestamp="2026-02-17 00:46:01 +0000 UTC" firstStartedPulling="2026-02-17 00:46:03.522511919 +0000 UTC m=+1389.538321317" 
lastFinishedPulling="2026-02-17 00:46:08.499914308 +0000 UTC m=+1394.515723706" observedRunningTime="2026-02-17 00:46:09.938868762 +0000 UTC m=+1395.954678160" watchObservedRunningTime="2026-02-17 00:46:09.946562186 +0000 UTC m=+1395.962371584" Feb 17 00:46:09 crc kubenswrapper[4805]: I0217 00:46:09.980101 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.997754342 podStartE2EDuration="8.980080048s" podCreationTimestamp="2026-02-17 00:46:01 +0000 UTC" firstStartedPulling="2026-02-17 00:46:03.507485871 +0000 UTC m=+1389.523295269" lastFinishedPulling="2026-02-17 00:46:08.489811577 +0000 UTC m=+1394.505620975" observedRunningTime="2026-02-17 00:46:09.976654883 +0000 UTC m=+1395.992464281" watchObservedRunningTime="2026-02-17 00:46:09.980080048 +0000 UTC m=+1395.995889436" Feb 17 00:46:10 crc kubenswrapper[4805]: I0217 00:46:10.018804 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.046838987 podStartE2EDuration="9.018786055s" podCreationTimestamp="2026-02-17 00:46:01 +0000 UTC" firstStartedPulling="2026-02-17 00:46:03.517130299 +0000 UTC m=+1389.532939697" lastFinishedPulling="2026-02-17 00:46:08.489077357 +0000 UTC m=+1394.504886765" observedRunningTime="2026-02-17 00:46:09.999434297 +0000 UTC m=+1396.015243695" watchObservedRunningTime="2026-02-17 00:46:10.018786055 +0000 UTC m=+1396.034595453" Feb 17 00:46:10 crc kubenswrapper[4805]: I0217 00:46:10.024437 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.692954329 podStartE2EDuration="9.024390461s" podCreationTimestamp="2026-02-17 00:46:01 +0000 UTC" firstStartedPulling="2026-02-17 00:46:03.157491691 +0000 UTC m=+1389.173301089" lastFinishedPulling="2026-02-17 00:46:08.488927823 +0000 UTC m=+1394.504737221" observedRunningTime="2026-02-17 00:46:10.017730356 +0000 UTC m=+1396.033539754" watchObservedRunningTime="2026-02-17 00:46:10.024390461 +0000 UTC m=+1396.040199859" Feb 17 00:46:10 crc kubenswrapper[4805]: I0217 00:46:10.941745 4805 generic.go:334] "Generic (PLEG): container finished" podID="c2860336-e1cb-448e-b21a-fa982c89be62" containerID="a7fc104b13a61ca30d228b916e9cc02cbeac806fba93b10b699905f257aa980c" exitCode=0 Feb 17 00:46:10 crc kubenswrapper[4805]: I0217 00:46:10.942008 4805 generic.go:334] "Generic (PLEG): container finished" podID="c2860336-e1cb-448e-b21a-fa982c89be62" containerID="80578d35c641fad864cf0031303ea4abdd06e08984e0ea5e927c4b00981f6267" exitCode=143 Feb 17 00:46:10 crc kubenswrapper[4805]: I0217 00:46:10.942634 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2860336-e1cb-448e-b21a-fa982c89be62","Type":"ContainerDied","Data":"a7fc104b13a61ca30d228b916e9cc02cbeac806fba93b10b699905f257aa980c"} Feb 17 00:46:10 crc kubenswrapper[4805]: I0217 00:46:10.942691 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2860336-e1cb-448e-b21a-fa982c89be62","Type":"ContainerDied","Data":"80578d35c641fad864cf0031303ea4abdd06e08984e0ea5e927c4b00981f6267"} Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.774182 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.821142 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-config-data\") pod \"c2860336-e1cb-448e-b21a-fa982c89be62\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.821293 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-combined-ca-bundle\") pod \"c2860336-e1cb-448e-b21a-fa982c89be62\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.821456 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2860336-e1cb-448e-b21a-fa982c89be62-logs\") pod \"c2860336-e1cb-448e-b21a-fa982c89be62\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.821529 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfl9c\" (UniqueName: \"kubernetes.io/projected/c2860336-e1cb-448e-b21a-fa982c89be62-kube-api-access-mfl9c\") pod \"c2860336-e1cb-448e-b21a-fa982c89be62\" (UID: \"c2860336-e1cb-448e-b21a-fa982c89be62\") " Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.822197 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2860336-e1cb-448e-b21a-fa982c89be62-logs" (OuterVolumeSpecName: "logs") pod "c2860336-e1cb-448e-b21a-fa982c89be62" (UID: "c2860336-e1cb-448e-b21a-fa982c89be62"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.838075 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2860336-e1cb-448e-b21a-fa982c89be62-kube-api-access-mfl9c" (OuterVolumeSpecName: "kube-api-access-mfl9c") pod "c2860336-e1cb-448e-b21a-fa982c89be62" (UID: "c2860336-e1cb-448e-b21a-fa982c89be62"). InnerVolumeSpecName "kube-api-access-mfl9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.846627 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.854542 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2860336-e1cb-448e-b21a-fa982c89be62" (UID: "c2860336-e1cb-448e-b21a-fa982c89be62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.863804 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-config-data" (OuterVolumeSpecName: "config-data") pod "c2860336-e1cb-448e-b21a-fa982c89be62" (UID: "c2860336-e1cb-448e-b21a-fa982c89be62"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.923543 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-sg-core-conf-yaml\") pod \"659e36cd-77e2-4f47-b7cd-b74591b47b74\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.923770 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-config-data\") pod \"659e36cd-77e2-4f47-b7cd-b74591b47b74\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.923892 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-log-httpd\") pod \"659e36cd-77e2-4f47-b7cd-b74591b47b74\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.923964 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-combined-ca-bundle\") pod \"659e36cd-77e2-4f47-b7cd-b74591b47b74\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.924178 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-run-httpd\") pod \"659e36cd-77e2-4f47-b7cd-b74591b47b74\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.924247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-scripts\") pod \"659e36cd-77e2-4f47-b7cd-b74591b47b74\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.924412 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz7dq\" (UniqueName: \"kubernetes.io/projected/659e36cd-77e2-4f47-b7cd-b74591b47b74-kube-api-access-zz7dq\") pod \"659e36cd-77e2-4f47-b7cd-b74591b47b74\" (UID: \"659e36cd-77e2-4f47-b7cd-b74591b47b74\") " Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.924894 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.924960 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2860336-e1cb-448e-b21a-fa982c89be62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.925022 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2860336-e1cb-448e-b21a-fa982c89be62-logs\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.925082 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfl9c\" (UniqueName: \"kubernetes.io/projected/c2860336-e1cb-448e-b21a-fa982c89be62-kube-api-access-mfl9c\") on node \"crc\" 
DevicePath \"\"" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.925749 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "659e36cd-77e2-4f47-b7cd-b74591b47b74" (UID: "659e36cd-77e2-4f47-b7cd-b74591b47b74"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.926369 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "659e36cd-77e2-4f47-b7cd-b74591b47b74" (UID: "659e36cd-77e2-4f47-b7cd-b74591b47b74"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.936488 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659e36cd-77e2-4f47-b7cd-b74591b47b74-kube-api-access-zz7dq" (OuterVolumeSpecName: "kube-api-access-zz7dq") pod "659e36cd-77e2-4f47-b7cd-b74591b47b74" (UID: "659e36cd-77e2-4f47-b7cd-b74591b47b74"). InnerVolumeSpecName "kube-api-access-zz7dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.947424 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-scripts" (OuterVolumeSpecName: "scripts") pod "659e36cd-77e2-4f47-b7cd-b74591b47b74" (UID: "659e36cd-77e2-4f47-b7cd-b74591b47b74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.955969 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2860336-e1cb-448e-b21a-fa982c89be62","Type":"ContainerDied","Data":"9ec9a415143c3ec18c3dda6fd949ab6674e257c4f6be82dce30c1bc643c5d9b4"} Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.956054 4805 scope.go:117] "RemoveContainer" containerID="a7fc104b13a61ca30d228b916e9cc02cbeac806fba93b10b699905f257aa980c" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.956207 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.962447 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7f08d9c-83a5-4818-992b-904fb159ec36","Type":"ContainerStarted","Data":"077f9413eaf07761963bd4c8ed1ede34469ab546d77b384a73809a839c13820e"} Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.969614 4805 generic.go:334] "Generic (PLEG): container finished" podID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerID="f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f" exitCode=0 Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.969665 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"659e36cd-77e2-4f47-b7cd-b74591b47b74","Type":"ContainerDied","Data":"f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f"} Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.969697 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"659e36cd-77e2-4f47-b7cd-b74591b47b74","Type":"ContainerDied","Data":"040fa14b358e918f72385e686ab6c1743febddaa1097b33e607b008b69dfbc5f"} Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.969774 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.985237 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "659e36cd-77e2-4f47-b7cd-b74591b47b74" (UID: "659e36cd-77e2-4f47-b7cd-b74591b47b74"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:11 crc kubenswrapper[4805]: I0217 00:46:11.996290 4805 scope.go:117] "RemoveContainer" containerID="80578d35c641fad864cf0031303ea4abdd06e08984e0ea5e927c4b00981f6267" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.013721 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.026623 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.027976 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz7dq\" (UniqueName: \"kubernetes.io/projected/659e36cd-77e2-4f47-b7cd-b74591b47b74-kube-api-access-zz7dq\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.027998 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.028010 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.028018 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.028027 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/659e36cd-77e2-4f47-b7cd-b74591b47b74-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.049001 4805 scope.go:117] "RemoveContainer" containerID="f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.076305 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.076627 4805 scope.go:117] "RemoveContainer" containerID="6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce" Feb 17 00:46:12 crc kubenswrapper[4805]: E0217 00:46:12.076796 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2860336-e1cb-448e-b21a-fa982c89be62" containerName="nova-metadata-metadata" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.076809 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2860336-e1cb-448e-b21a-fa982c89be62" containerName="nova-metadata-metadata" Feb 17 00:46:12 crc kubenswrapper[4805]: E0217 00:46:12.076823 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="ceilometer-notification-agent" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.076829 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="ceilometer-notification-agent" Feb 17 00:46:12 crc kubenswrapper[4805]: E0217 00:46:12.076855 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="sg-core" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.076861 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="sg-core" Feb 17 00:46:12 crc kubenswrapper[4805]: E0217 00:46:12.076874 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="ceilometer-central-agent" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.076880 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="ceilometer-central-agent" Feb 17 00:46:12 crc kubenswrapper[4805]: E0217 00:46:12.076896 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="proxy-httpd" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.076903 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="proxy-httpd" Feb 17 00:46:12 crc kubenswrapper[4805]: E0217 00:46:12.076915 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2860336-e1cb-448e-b21a-fa982c89be62" containerName="nova-metadata-log" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.076921 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2860336-e1cb-448e-b21a-fa982c89be62" containerName="nova-metadata-log" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.077101 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2860336-e1cb-448e-b21a-fa982c89be62" containerName="nova-metadata-log" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.077112 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="ceilometer-notification-agent" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.077124 4805 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="ceilometer-central-agent" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.077134 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="proxy-httpd" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.077142 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" containerName="sg-core" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.077164 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2860336-e1cb-448e-b21a-fa982c89be62" containerName="nova-metadata-metadata" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.078287 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.080257 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.081378 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.090452 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "659e36cd-77e2-4f47-b7cd-b74591b47b74" (UID: "659e36cd-77e2-4f47-b7cd-b74591b47b74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.092005 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.101914 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-config-data" (OuterVolumeSpecName: "config-data") pod "659e36cd-77e2-4f47-b7cd-b74591b47b74" (UID: "659e36cd-77e2-4f47-b7cd-b74591b47b74"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.129698 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-config-data\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.129768 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.129833 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.129870 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp44r\" (UniqueName: \"kubernetes.io/projected/a661bdf7-dfb8-4413-8caa-e674549bc204-kube-api-access-cp44r\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.129898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a661bdf7-dfb8-4413-8caa-e674549bc204-logs\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.129987 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.129999 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/659e36cd-77e2-4f47-b7cd-b74591b47b74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.145732 4805 scope.go:117] "RemoveContainer" containerID="f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.169797 4805 scope.go:117] "RemoveContainer" containerID="f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.187624 4805 scope.go:117] "RemoveContainer" containerID="f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6" Feb 17 00:46:12 crc kubenswrapper[4805]: E0217 00:46:12.188036 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6\": container with ID starting with f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6 not found: ID does not exist" containerID="f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 
00:46:12.188073 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6"} err="failed to get container status \"f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6\": rpc error: code = NotFound desc = could not find container \"f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6\": container with ID starting with f74a2c2ce02d224018de160a86bc0e6d5b6e472ce6afb03df92e1bd4a57fb3a6 not found: ID does not exist" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.188105 4805 scope.go:117] "RemoveContainer" containerID="6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce" Feb 17 00:46:12 crc kubenswrapper[4805]: E0217 00:46:12.188464 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce\": container with ID starting with 6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce not found: ID does not exist" containerID="6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.188493 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce"} err="failed to get container status \"6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce\": rpc error: code = NotFound desc = could not find container \"6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce\": container with ID starting with 6026a82c63ec17765c21a7bacb183ba35b39a287f01ebc743eb6de78bf648cce not found: ID does not exist" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.188518 4805 scope.go:117] "RemoveContainer" containerID="f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f" Feb 17 00:46:12 crc kubenswrapper[4805]: E0217 00:46:12.188700 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f\": container with ID starting with f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f not found: ID does not exist" containerID="f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.188723 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f"} err="failed to get container status \"f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f\": rpc error: code = NotFound desc = could not find container \"f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f\": container with ID starting with f62a923c6da9ee0bb3ca57bf85a921fb4ea0df08bf2b58aa8a31f0f4b091706f not found: ID does not exist" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.188736 4805 scope.go:117] "RemoveContainer" containerID="f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2" Feb 17 00:46:12 crc kubenswrapper[4805]: E0217 00:46:12.188929 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2\": container with ID starting with f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2 
not found: ID does not exist" containerID="f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.188953 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2"} err="failed to get container status \"f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2\": rpc error: code = NotFound desc = could not find container \"f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2\": container with ID starting with f083108ffea4c6a1b873fee6a9a82f048bbbb393e26357e2f63e4e963d1b35a2 not found: ID does not exist" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.232226 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-config-data\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.232271 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.232335 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.232369 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp44r\" (UniqueName: \"kubernetes.io/projected/a661bdf7-dfb8-4413-8caa-e674549bc204-kube-api-access-cp44r\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.232404 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a661bdf7-dfb8-4413-8caa-e674549bc204-logs\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.232833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a661bdf7-dfb8-4413-8caa-e674549bc204-logs\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.236901 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.236909 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-config-data\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " 
pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.237829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.249205 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp44r\" (UniqueName: \"kubernetes.io/projected/a661bdf7-dfb8-4413-8caa-e674549bc204-kube-api-access-cp44r\") pod \"nova-metadata-0\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.250218 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.250245 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.270861 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.316125 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.316175 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.348107 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.363524 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.412235 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.446866 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.449391 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.498273 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.500821 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.507957 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.508191 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.536735 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.538065 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghbt2\" (UniqueName: \"kubernetes.io/projected/7fe95195-d873-4aee-8a51-d8986cf5b205-kube-api-access-ghbt2\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.538130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-run-httpd\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.538174 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-config-data\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.538210 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-scripts\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.538230 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.538259 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.538371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-log-httpd\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.585714 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-h5drq"] Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.585960 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" podUID="2557baf3-efbc-4e37-bb54-e3b55b097025" 
containerName="dnsmasq-dns" containerID="cri-o://6fcc98eb6f4388ff0020ffe754be43dce70a15dfa47a38ab0ea36dcfa8c19fed" gracePeriod=10 Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.675875 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-scripts\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.675930 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.675958 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.676029 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-log-httpd\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.676093 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghbt2\" (UniqueName: \"kubernetes.io/projected/7fe95195-d873-4aee-8a51-d8986cf5b205-kube-api-access-ghbt2\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.676142 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-run-httpd\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.676181 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-config-data\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.687161 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-run-httpd\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.687835 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.688228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-config-data\") pod 
\"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.689971 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-log-httpd\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.703603 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghbt2\" (UniqueName: \"kubernetes.io/projected/7fe95195-d873-4aee-8a51-d8986cf5b205-kube-api-access-ghbt2\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.705130 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.708816 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-scripts\") pod \"ceilometer-0\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.764158 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" podUID="2557baf3-efbc-4e37-bb54-e3b55b097025" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.199:5353: connect: connection refused" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.775587 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.850505 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="659e36cd-77e2-4f47-b7cd-b74591b47b74" path="/var/lib/kubelet/pods/659e36cd-77e2-4f47-b7cd-b74591b47b74/volumes" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.857057 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2860336-e1cb-448e-b21a-fa982c89be62" path="/var/lib/kubelet/pods/c2860336-e1cb-448e-b21a-fa982c89be62/volumes" Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.987472 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7b5v8" event={"ID":"1c2db1e4-4262-4a81-83fe-a9b9f0565beb","Type":"ContainerDied","Data":"49fb302df7845bc1cad0e323dac46a516f0ac83b0d976718a49d4d4a0252f981"} Feb 17 00:46:12 crc kubenswrapper[4805]: I0217 00:46:12.987450 4805 generic.go:334] "Generic (PLEG): container finished" podID="1c2db1e4-4262-4a81-83fe-a9b9f0565beb" containerID="49fb302df7845bc1cad0e323dac46a516f0ac83b0d976718a49d4d4a0252f981" exitCode=0 Feb 17 00:46:13 crc kubenswrapper[4805]: I0217 00:46:13.284519 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 00:46:13 crc kubenswrapper[4805]: I0217 00:46:13.331539 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 00:46:13 crc kubenswrapper[4805]: I0217 00:46:13.331571 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 00:46:13 crc kubenswrapper[4805]: I0217 00:46:13.873613 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:13 crc kubenswrapper[4805]: I0217 00:46:13.904952 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.027277 4805 generic.go:334] "Generic (PLEG): container finished" podID="2557baf3-efbc-4e37-bb54-e3b55b097025" containerID="6fcc98eb6f4388ff0020ffe754be43dce70a15dfa47a38ab0ea36dcfa8c19fed" exitCode=0 Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.027404 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" event={"ID":"2557baf3-efbc-4e37-bb54-e3b55b097025","Type":"ContainerDied","Data":"6fcc98eb6f4388ff0020ffe754be43dce70a15dfa47a38ab0ea36dcfa8c19fed"} Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.481118 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.646909 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-svc\") pod \"2557baf3-efbc-4e37-bb54-e3b55b097025\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.647196 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-sb\") pod \"2557baf3-efbc-4e37-bb54-e3b55b097025\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.647361 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-config\") pod \"2557baf3-efbc-4e37-bb54-e3b55b097025\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.647416 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-swift-storage-0\") pod \"2557baf3-efbc-4e37-bb54-e3b55b097025\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.647482 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-nb\") pod \"2557baf3-efbc-4e37-bb54-e3b55b097025\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.647508 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnz4s\" (UniqueName: \"kubernetes.io/projected/2557baf3-efbc-4e37-bb54-e3b55b097025-kube-api-access-cnz4s\") pod \"2557baf3-efbc-4e37-bb54-e3b55b097025\" (UID: \"2557baf3-efbc-4e37-bb54-e3b55b097025\") " Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.659141 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2557baf3-efbc-4e37-bb54-e3b55b097025-kube-api-access-cnz4s" (OuterVolumeSpecName: "kube-api-access-cnz4s") pod "2557baf3-efbc-4e37-bb54-e3b55b097025" (UID: "2557baf3-efbc-4e37-bb54-e3b55b097025"). InnerVolumeSpecName "kube-api-access-cnz4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.675163 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.750409 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnz4s\" (UniqueName: \"kubernetes.io/projected/2557baf3-efbc-4e37-bb54-e3b55b097025-kube-api-access-cnz4s\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.758710 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-config" (OuterVolumeSpecName: "config") pod "2557baf3-efbc-4e37-bb54-e3b55b097025" (UID: "2557baf3-efbc-4e37-bb54-e3b55b097025"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.761954 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2557baf3-efbc-4e37-bb54-e3b55b097025" (UID: "2557baf3-efbc-4e37-bb54-e3b55b097025"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.768410 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2557baf3-efbc-4e37-bb54-e3b55b097025" (UID: "2557baf3-efbc-4e37-bb54-e3b55b097025"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.778744 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2557baf3-efbc-4e37-bb54-e3b55b097025" (UID: "2557baf3-efbc-4e37-bb54-e3b55b097025"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.856955 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbbfr\" (UniqueName: \"kubernetes.io/projected/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-kube-api-access-cbbfr\") pod \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.857193 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-combined-ca-bundle\") pod \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.857257 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-config-data\") pod \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.857308 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-scripts\") pod \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\" (UID: \"1c2db1e4-4262-4a81-83fe-a9b9f0565beb\") " Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.857744 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.857759 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.857767 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.857778 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.875445 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-kube-api-access-cbbfr" (OuterVolumeSpecName: "kube-api-access-cbbfr") pod "1c2db1e4-4262-4a81-83fe-a9b9f0565beb" (UID: "1c2db1e4-4262-4a81-83fe-a9b9f0565beb"). InnerVolumeSpecName "kube-api-access-cbbfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.875948 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-scripts" (OuterVolumeSpecName: "scripts") pod "1c2db1e4-4262-4a81-83fe-a9b9f0565beb" (UID: "1c2db1e4-4262-4a81-83fe-a9b9f0565beb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.947076 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2557baf3-efbc-4e37-bb54-e3b55b097025" (UID: "2557baf3-efbc-4e37-bb54-e3b55b097025"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.960250 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2557baf3-efbc-4e37-bb54-e3b55b097025-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.960274 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:14 crc kubenswrapper[4805]: I0217 00:46:14.960285 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbbfr\" (UniqueName: \"kubernetes.io/projected/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-kube-api-access-cbbfr\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.033581 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c2db1e4-4262-4a81-83fe-a9b9f0565beb" (UID: "1c2db1e4-4262-4a81-83fe-a9b9f0565beb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.045148 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fe95195-d873-4aee-8a51-d8986cf5b205","Type":"ContainerStarted","Data":"5c2998b368f6c6c7d2310182533854d1c9b9e9b5b940377d87e74682ecd07824"} Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.045443 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-config-data" (OuterVolumeSpecName: "config-data") pod "1c2db1e4-4262-4a81-83fe-a9b9f0565beb" (UID: "1c2db1e4-4262-4a81-83fe-a9b9f0565beb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.051014 4805 generic.go:334] "Generic (PLEG): container finished" podID="3de33584-3604-4b64-ae95-9d18066a35a6" containerID="5f0cd60d7fbd48c58c1edeb30fe4192e14f9dd1277a35fa9a671b5eb210a3f7d" exitCode=0 Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.051090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jbwzz" event={"ID":"3de33584-3604-4b64-ae95-9d18066a35a6","Type":"ContainerDied","Data":"5f0cd60d7fbd48c58c1edeb30fe4192e14f9dd1277a35fa9a671b5eb210a3f7d"} Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.056262 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7b5v8" Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.056446 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7b5v8" event={"ID":"1c2db1e4-4262-4a81-83fe-a9b9f0565beb","Type":"ContainerDied","Data":"b69de45c4d9f83577bb0361fe09355927751c3399be2bba01181323512c4cf00"} Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.056496 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69de45c4d9f83577bb0361fe09355927751c3399be2bba01181323512c4cf00" Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.061626 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.061654 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c2db1e4-4262-4a81-83fe-a9b9f0565beb-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.072443 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a661bdf7-dfb8-4413-8caa-e674549bc204","Type":"ContainerStarted","Data":"4e4ff1dcea7aeca3bacf3eeb68257afe6dd1851b958146170c3d45876c1b4aac"} Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.078538 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7f08d9c-83a5-4818-992b-904fb159ec36","Type":"ContainerStarted","Data":"f838df9915baf461ff6f626de60e4d78a2231d34e47e458da0346be485602e9c"} Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.083480 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" event={"ID":"2557baf3-efbc-4e37-bb54-e3b55b097025","Type":"ContainerDied","Data":"f64de87027dd508e6df92d18a4d82289e67b38f0c327e53b566cd59970ae0297"} Feb 17 00:46:15 crc kubenswrapper[4805]: 
I0217 00:46:15.083511 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-h5drq" Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.083540 4805 scope.go:117] "RemoveContainer" containerID="6fcc98eb6f4388ff0020ffe754be43dce70a15dfa47a38ab0ea36dcfa8c19fed" Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.117811 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.614612791 podStartE2EDuration="14.117790359s" podCreationTimestamp="2026-02-17 00:46:01 +0000 UTC" firstStartedPulling="2026-02-17 00:46:03.034970631 +0000 UTC m=+1389.050780029" lastFinishedPulling="2026-02-17 00:46:14.538148199 +0000 UTC m=+1400.553957597" observedRunningTime="2026-02-17 00:46:15.09915333 +0000 UTC m=+1401.114962728" watchObservedRunningTime="2026-02-17 00:46:15.117790359 +0000 UTC m=+1401.133599757" Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.145610 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-h5drq"] Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.151140 4805 scope.go:117] "RemoveContainer" containerID="5a9fe183d0abab2af57291061528cc05da8a451502bdb016fd2410e9e9190375" Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.178571 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-h5drq"] Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.198412 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.265666 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.265869 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerName="nova-api-log" containerID="cri-o://7d42044813be7120c6c614bcc9cf97b18880ae87fe0e45589a057334b702dedd" gracePeriod=30 Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.267702 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerName="nova-api-api" containerID="cri-o://937bd9e1f51a14bbe70f454cd781f8b5df6908f9362bac05282f4b16be8c02a4" gracePeriod=30 Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.290690 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.290839 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="faf65c97-4fea-4e54-a6d2-847c03970bf5" containerName="nova-scheduler-scheduler" containerID="cri-o://fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916" gracePeriod=30 Feb 17 00:46:15 crc kubenswrapper[4805]: I0217 00:46:15.321316 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.068548 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.073581 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1c79f087-7a87-405e-8a91-8450f22de65d" containerName="kube-state-metrics" 
containerID="cri-o://15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8" gracePeriod=30 Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.111875 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fe95195-d873-4aee-8a51-d8986cf5b205","Type":"ContainerStarted","Data":"8d36ab16607ad1c9dd9ac4efe5539a1e8707a2a723db9b6c678a04c5388efdca"} Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.111916 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fe95195-d873-4aee-8a51-d8986cf5b205","Type":"ContainerStarted","Data":"fb14ba424999982e591effe3afaac3826e21b73611dc4b273dcf2a7f9c9bbd2c"} Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.113315 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a661bdf7-dfb8-4413-8caa-e674549bc204","Type":"ContainerStarted","Data":"c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a"} Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.113353 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a661bdf7-dfb8-4413-8caa-e674549bc204","Type":"ContainerStarted","Data":"22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b"} Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.113473 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a661bdf7-dfb8-4413-8caa-e674549bc204" containerName="nova-metadata-log" containerID="cri-o://c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a" gracePeriod=30 Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.113949 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a661bdf7-dfb8-4413-8caa-e674549bc204" containerName="nova-metadata-metadata" containerID="cri-o://22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b" gracePeriod=30 Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.119995 4805 generic.go:334] "Generic (PLEG): container finished" podID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerID="7d42044813be7120c6c614bcc9cf97b18880ae87fe0e45589a057334b702dedd" exitCode=143 Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.120053 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"914d4f54-76f7-402b-b453-b5badec5d1bb","Type":"ContainerDied","Data":"7d42044813be7120c6c614bcc9cf97b18880ae87fe0e45589a057334b702dedd"} Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.147161 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.147144304 podStartE2EDuration="4.147144304s" podCreationTimestamp="2026-02-17 00:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:16.138777671 +0000 UTC m=+1402.154587069" watchObservedRunningTime="2026-02-17 00:46:16.147144304 +0000 UTC m=+1402.162953702" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.212718 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.213098 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c" containerName="mysqld-exporter" 
containerID="cri-o://0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198" gracePeriod=30 Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.689436 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.783117 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.799101 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2557baf3-efbc-4e37-bb54-e3b55b097025" path="/var/lib/kubelet/pods/2557baf3-efbc-4e37-bb54-e3b55b097025/volumes" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.812710 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c67k\" (UniqueName: \"kubernetes.io/projected/1c79f087-7a87-405e-8a91-8450f22de65d-kube-api-access-4c67k\") pod \"1c79f087-7a87-405e-8a91-8450f22de65d\" (UID: \"1c79f087-7a87-405e-8a91-8450f22de65d\") " Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.812804 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jjmp\" (UniqueName: \"kubernetes.io/projected/3de33584-3604-4b64-ae95-9d18066a35a6-kube-api-access-2jjmp\") pod \"3de33584-3604-4b64-ae95-9d18066a35a6\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.812863 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-scripts\") pod \"3de33584-3604-4b64-ae95-9d18066a35a6\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.812884 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-config-data\") pod \"3de33584-3604-4b64-ae95-9d18066a35a6\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.812907 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-combined-ca-bundle\") pod \"3de33584-3604-4b64-ae95-9d18066a35a6\" (UID: \"3de33584-3604-4b64-ae95-9d18066a35a6\") " Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.827498 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de33584-3604-4b64-ae95-9d18066a35a6-kube-api-access-2jjmp" (OuterVolumeSpecName: "kube-api-access-2jjmp") pod "3de33584-3604-4b64-ae95-9d18066a35a6" (UID: "3de33584-3604-4b64-ae95-9d18066a35a6"). InnerVolumeSpecName "kube-api-access-2jjmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.827652 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c79f087-7a87-405e-8a91-8450f22de65d-kube-api-access-4c67k" (OuterVolumeSpecName: "kube-api-access-4c67k") pod "1c79f087-7a87-405e-8a91-8450f22de65d" (UID: "1c79f087-7a87-405e-8a91-8450f22de65d"). InnerVolumeSpecName "kube-api-access-4c67k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.833357 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-scripts" (OuterVolumeSpecName: "scripts") pod "3de33584-3604-4b64-ae95-9d18066a35a6" (UID: "3de33584-3604-4b64-ae95-9d18066a35a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.857588 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-config-data" (OuterVolumeSpecName: "config-data") pod "3de33584-3604-4b64-ae95-9d18066a35a6" (UID: "3de33584-3604-4b64-ae95-9d18066a35a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.905218 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3de33584-3604-4b64-ae95-9d18066a35a6" (UID: "3de33584-3604-4b64-ae95-9d18066a35a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.924819 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c67k\" (UniqueName: \"kubernetes.io/projected/1c79f087-7a87-405e-8a91-8450f22de65d-kube-api-access-4c67k\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.924853 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jjmp\" (UniqueName: \"kubernetes.io/projected/3de33584-3604-4b64-ae95-9d18066a35a6-kube-api-access-2jjmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.924862 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.924870 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.924881 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de33584-3604-4b64-ae95-9d18066a35a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.946082 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.987680 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 00:46:16 crc kubenswrapper[4805]: I0217 00:46:16.997755 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.127918 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-combined-ca-bundle\") pod \"faf65c97-4fea-4e54-a6d2-847c03970bf5\" (UID: \"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.128035 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp44r\" (UniqueName: \"kubernetes.io/projected/a661bdf7-dfb8-4413-8caa-e674549bc204-kube-api-access-cp44r\") pod \"a661bdf7-dfb8-4413-8caa-e674549bc204\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.128070 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-config-data\") pod \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.128111 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5dbf\" (UniqueName: \"kubernetes.io/projected/faf65c97-4fea-4e54-a6d2-847c03970bf5-kube-api-access-w5dbf\") pod \"faf65c97-4fea-4e54-a6d2-847c03970bf5\" (UID: \"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.128137 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-combined-ca-bundle\") pod \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.128162 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a661bdf7-dfb8-4413-8caa-e674549bc204-logs\") pod \"a661bdf7-dfb8-4413-8caa-e674549bc204\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.128207 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-combined-ca-bundle\") pod \"a661bdf7-dfb8-4413-8caa-e674549bc204\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.128253 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-config-data\") pod \"faf65c97-4fea-4e54-a6d2-847c03970bf5\" (UID: \"faf65c97-4fea-4e54-a6d2-847c03970bf5\") " Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.128311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-config-data\") pod \"a661bdf7-dfb8-4413-8caa-e674549bc204\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.128409 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-nova-metadata-tls-certs\") pod 
\"a661bdf7-dfb8-4413-8caa-e674549bc204\" (UID: \"a661bdf7-dfb8-4413-8caa-e674549bc204\") " Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.128439 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvx8l\" (UniqueName: \"kubernetes.io/projected/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-kube-api-access-xvx8l\") pod \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\" (UID: \"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c\") " Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.132835 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a661bdf7-dfb8-4413-8caa-e674549bc204-logs" (OuterVolumeSpecName: "logs") pod "a661bdf7-dfb8-4413-8caa-e674549bc204" (UID: "a661bdf7-dfb8-4413-8caa-e674549bc204"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.136521 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a661bdf7-dfb8-4413-8caa-e674549bc204-kube-api-access-cp44r" (OuterVolumeSpecName: "kube-api-access-cp44r") pod "a661bdf7-dfb8-4413-8caa-e674549bc204" (UID: "a661bdf7-dfb8-4413-8caa-e674549bc204"). InnerVolumeSpecName "kube-api-access-cp44r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.169091 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-kube-api-access-xvx8l" (OuterVolumeSpecName: "kube-api-access-xvx8l") pod "2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c" (UID: "2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c"). InnerVolumeSpecName "kube-api-access-xvx8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.170450 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf65c97-4fea-4e54-a6d2-847c03970bf5-kube-api-access-w5dbf" (OuterVolumeSpecName: "kube-api-access-w5dbf") pod "faf65c97-4fea-4e54-a6d2-847c03970bf5" (UID: "faf65c97-4fea-4e54-a6d2-847c03970bf5"). InnerVolumeSpecName "kube-api-access-w5dbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.206687 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a661bdf7-dfb8-4413-8caa-e674549bc204" (UID: "a661bdf7-dfb8-4413-8caa-e674549bc204"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.207703 4805 generic.go:334] "Generic (PLEG): container finished" podID="a661bdf7-dfb8-4413-8caa-e674549bc204" containerID="22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b" exitCode=0 Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.207759 4805 generic.go:334] "Generic (PLEG): container finished" podID="a661bdf7-dfb8-4413-8caa-e674549bc204" containerID="c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a" exitCode=143 Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.207858 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a661bdf7-dfb8-4413-8caa-e674549bc204","Type":"ContainerDied","Data":"22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b"} Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.207913 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a661bdf7-dfb8-4413-8caa-e674549bc204","Type":"ContainerDied","Data":"c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a"} Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.207929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a661bdf7-dfb8-4413-8caa-e674549bc204","Type":"ContainerDied","Data":"4e4ff1dcea7aeca3bacf3eeb68257afe6dd1851b958146170c3d45876c1b4aac"} Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.207948 4805 scope.go:117] "RemoveContainer" containerID="22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.208186 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.225384 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.225855 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2db1e4-4262-4a81-83fe-a9b9f0565beb" containerName="nova-manage" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.225870 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2db1e4-4262-4a81-83fe-a9b9f0565beb" containerName="nova-manage" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.225880 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c79f087-7a87-405e-8a91-8450f22de65d" containerName="kube-state-metrics" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.225886 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c79f087-7a87-405e-8a91-8450f22de65d" containerName="kube-state-metrics" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.225896 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2557baf3-efbc-4e37-bb54-e3b55b097025" containerName="dnsmasq-dns" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.225902 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2557baf3-efbc-4e37-bb54-e3b55b097025" containerName="dnsmasq-dns" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.225913 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2557baf3-efbc-4e37-bb54-e3b55b097025" containerName="init" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.225919 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2557baf3-efbc-4e37-bb54-e3b55b097025" containerName="init" Feb 17 00:46:17 crc 
kubenswrapper[4805]: E0217 00:46:17.225927 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a661bdf7-dfb8-4413-8caa-e674549bc204" containerName="nova-metadata-log" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.225933 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a661bdf7-dfb8-4413-8caa-e674549bc204" containerName="nova-metadata-log" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.225943 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf65c97-4fea-4e54-a6d2-847c03970bf5" containerName="nova-scheduler-scheduler" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.225949 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf65c97-4fea-4e54-a6d2-847c03970bf5" containerName="nova-scheduler-scheduler" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.225966 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a661bdf7-dfb8-4413-8caa-e674549bc204" containerName="nova-metadata-metadata" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.225972 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a661bdf7-dfb8-4413-8caa-e674549bc204" containerName="nova-metadata-metadata" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.225984 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de33584-3604-4b64-ae95-9d18066a35a6" containerName="nova-cell1-conductor-db-sync" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.225991 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de33584-3604-4b64-ae95-9d18066a35a6" containerName="nova-cell1-conductor-db-sync" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.226002 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c" containerName="mysqld-exporter" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.226009 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c" containerName="mysqld-exporter" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.226194 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c" containerName="mysqld-exporter" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.226207 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf65c97-4fea-4e54-a6d2-847c03970bf5" containerName="nova-scheduler-scheduler" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.226222 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c79f087-7a87-405e-8a91-8450f22de65d" containerName="kube-state-metrics" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.226234 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a661bdf7-dfb8-4413-8caa-e674549bc204" containerName="nova-metadata-metadata" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.226249 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2557baf3-efbc-4e37-bb54-e3b55b097025" containerName="dnsmasq-dns" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.226259 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de33584-3604-4b64-ae95-9d18066a35a6" containerName="nova-cell1-conductor-db-sync" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.226273 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c2db1e4-4262-4a81-83fe-a9b9f0565beb" containerName="nova-manage" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.226280 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a661bdf7-dfb8-4413-8caa-e674549bc204" containerName="nova-metadata-log" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.226957 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.230700 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp44r\" (UniqueName: \"kubernetes.io/projected/a661bdf7-dfb8-4413-8caa-e674549bc204-kube-api-access-cp44r\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.230723 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5dbf\" (UniqueName: \"kubernetes.io/projected/faf65c97-4fea-4e54-a6d2-847c03970bf5-kube-api-access-w5dbf\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.230731 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a661bdf7-dfb8-4413-8caa-e674549bc204-logs\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.230741 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.230749 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvx8l\" (UniqueName: \"kubernetes.io/projected/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-kube-api-access-xvx8l\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.248205 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jbwzz" event={"ID":"3de33584-3604-4b64-ae95-9d18066a35a6","Type":"ContainerDied","Data":"d20c5df2a216d9e5fc022f848d18a586c019febf0ff0074312c642d809c68ee4"} Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.248236 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d20c5df2a216d9e5fc022f848d18a586c019febf0ff0074312c642d809c68ee4" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.248289 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jbwzz" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.310700 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.315525 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c" (UID: "2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.321601 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faf65c97-4fea-4e54-a6d2-847c03970bf5" (UID: "faf65c97-4fea-4e54-a6d2-847c03970bf5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.330433 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-config-data" (OuterVolumeSpecName: "config-data") pod "faf65c97-4fea-4e54-a6d2-847c03970bf5" (UID: "faf65c97-4fea-4e54-a6d2-847c03970bf5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.332524 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0a1ae2-2057-4b54-b01d-ca8bafe09be3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fe0a1ae2-2057-4b54-b01d-ca8bafe09be3\") " pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.332560 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvrhg\" (UniqueName: \"kubernetes.io/projected/fe0a1ae2-2057-4b54-b01d-ca8bafe09be3-kube-api-access-rvrhg\") pod \"nova-cell1-conductor-0\" (UID: \"fe0a1ae2-2057-4b54-b01d-ca8bafe09be3\") " pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.332584 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0a1ae2-2057-4b54-b01d-ca8bafe09be3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fe0a1ae2-2057-4b54-b01d-ca8bafe09be3\") " pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.332752 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.332768 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.332777 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf65c97-4fea-4e54-a6d2-847c03970bf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.338892 4805 generic.go:334] "Generic (PLEG): container finished" podID="1c79f087-7a87-405e-8a91-8450f22de65d" containerID="15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8" exitCode=2 Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.338944 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1c79f087-7a87-405e-8a91-8450f22de65d","Type":"ContainerDied","Data":"15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8"} Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.338966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1c79f087-7a87-405e-8a91-8450f22de65d","Type":"ContainerDied","Data":"cd029041cb1c03b2678f6f6a1a8b65e377894148ff84abf8d8d5308f15453286"} Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.339011 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.352529 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-config-data" (OuterVolumeSpecName: "config-data") pod "a661bdf7-dfb8-4413-8caa-e674549bc204" (UID: "a661bdf7-dfb8-4413-8caa-e674549bc204"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.355121 4805 generic.go:334] "Generic (PLEG): container finished" podID="2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c" containerID="0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198" exitCode=2 Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.355532 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c","Type":"ContainerDied","Data":"0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198"} Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.355619 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c","Type":"ContainerDied","Data":"fe3f306321f49d570a41d74679604d49c2b9e621cc545ea150d66223ae8ad0f7"} Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.355847 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.360089 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fe95195-d873-4aee-8a51-d8986cf5b205","Type":"ContainerStarted","Data":"a8e46b87df65342fcf5bb8a857c34e1d40d0ebb8f0827f7adcf8d023e0df672b"} Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.362300 4805 scope.go:117] "RemoveContainer" containerID="c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.362460 4805 generic.go:334] "Generic (PLEG): container finished" podID="faf65c97-4fea-4e54-a6d2-847c03970bf5" containerID="fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916" exitCode=0 Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.362636 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-api" containerID="cri-o://647f5e61f4fad824e69b8e3b7b72a9a15e50feb1eef3fc00d642c22c0a441735" gracePeriod=30 Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.362879 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.366036 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-notifier" containerID="cri-o://077f9413eaf07761963bd4c8ed1ede34469ab546d77b384a73809a839c13820e" gracePeriod=30 Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.366144 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"faf65c97-4fea-4e54-a6d2-847c03970bf5","Type":"ContainerDied","Data":"fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916"} Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.366172 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"faf65c97-4fea-4e54-a6d2-847c03970bf5","Type":"ContainerDied","Data":"9b1c751f14d7972e52be61054ca748f7aac60973bfc2c5c5c6e6da221bfba6ca"} Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.366188 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-listener" containerID="cri-o://f838df9915baf461ff6f626de60e4d78a2231d34e47e458da0346be485602e9c" gracePeriod=30 Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.366235 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-evaluator" containerID="cri-o://ca50c322fbc6cc974342d4fd9cc9184d3b3addce0e501fa53060ca27d9ddcce6" gracePeriod=30 Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.415494 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a661bdf7-dfb8-4413-8caa-e674549bc204" (UID: "a661bdf7-dfb8-4413-8caa-e674549bc204"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.415851 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-config-data" (OuterVolumeSpecName: "config-data") pod "2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c" (UID: "2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.434560 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0a1ae2-2057-4b54-b01d-ca8bafe09be3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fe0a1ae2-2057-4b54-b01d-ca8bafe09be3\") " pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.434608 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvrhg\" (UniqueName: \"kubernetes.io/projected/fe0a1ae2-2057-4b54-b01d-ca8bafe09be3-kube-api-access-rvrhg\") pod \"nova-cell1-conductor-0\" (UID: \"fe0a1ae2-2057-4b54-b01d-ca8bafe09be3\") " pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.434629 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0a1ae2-2057-4b54-b01d-ca8bafe09be3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fe0a1ae2-2057-4b54-b01d-ca8bafe09be3\") " pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.434733 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.434744 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.434754 4805 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a661bdf7-dfb8-4413-8caa-e674549bc204-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.437566 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0a1ae2-2057-4b54-b01d-ca8bafe09be3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"fe0a1ae2-2057-4b54-b01d-ca8bafe09be3\") " pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.441271 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0a1ae2-2057-4b54-b01d-ca8bafe09be3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"fe0a1ae2-2057-4b54-b01d-ca8bafe09be3\") " pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.462470 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvrhg\" (UniqueName: \"kubernetes.io/projected/fe0a1ae2-2057-4b54-b01d-ca8bafe09be3-kube-api-access-rvrhg\") pod \"nova-cell1-conductor-0\" (UID: \"fe0a1ae2-2057-4b54-b01d-ca8bafe09be3\") " pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.583498 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.607607 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.636791 4805 scope.go:117] "RemoveContainer" 
containerID="22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.636935 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.638208 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b\": container with ID starting with 22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b not found: ID does not exist" containerID="22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.638241 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b"} err="failed to get container status \"22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b\": rpc error: code = NotFound desc = could not find container \"22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b\": container with ID starting with 22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b not found: ID does not exist" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.638264 4805 scope.go:117] "RemoveContainer" containerID="c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.640969 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a\": container with ID starting with c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a not found: ID does not exist" containerID="c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.641051 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a"} err="failed to get container status \"c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a\": rpc error: code = NotFound desc = could not find container \"c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a\": container with ID starting with c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a not found: ID does not exist" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.641068 4805 scope.go:117] "RemoveContainer" containerID="22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.645080 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b"} err="failed to get container status \"22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b\": rpc error: code = NotFound desc = could not find container \"22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b\": container with ID starting with 22a279a3ef5e07d6ff3c06bd1308fccb4b36680befe3a7d4605c7e12820fb08b not found: ID does not exist" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.645114 4805 scope.go:117] "RemoveContainer" containerID="c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 
00:46:17.645952 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.646011 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a"} err="failed to get container status \"c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a\": rpc error: code = NotFound desc = could not find container \"c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a\": container with ID starting with c6c7dce00df7b7bda3e72e6e2f06ba7c59a540a0dfa19362086d9da060344c9a not found: ID does not exist" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.646038 4805 scope.go:117] "RemoveContainer" containerID="15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.660554 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.671782 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.673767 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.676046 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.676251 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.683773 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.695390 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.699620 4805 scope.go:117] "RemoveContainer" containerID="15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.700378 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8\": container with ID starting with 15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8 not found: ID does not exist" containerID="15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.700406 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8"} err="failed to get container status \"15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8\": rpc error: code = NotFound desc = could not find container \"15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8\": container with ID starting with 15e7fbecbf34554dddd7419acd503c333bfa13f763c5bca70619ae9ae79a61e8 not found: ID does not exist" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.700425 4805 scope.go:117] "RemoveContainer" containerID="0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.707397 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-metadata-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.709170 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.711732 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.712040 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.721387 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.728364 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.740227 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.741640 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.743283 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.751397 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.770569 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-config-data\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.770670 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vtlw\" (UniqueName: \"kubernetes.io/projected/ef39d973-397f-4d39-9e6a-7debbc762911-kube-api-access-7vtlw\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.770706 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.770732 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.770782 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.770796 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq6nq\" (UniqueName: \"kubernetes.io/projected/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-kube-api-access-pq6nq\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.770819 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.770853 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-config-data\") pod \"nova-scheduler-0\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.770872 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef39d973-397f-4d39-9e6a-7debbc762911-logs\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.770893 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.771257 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.771371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2wzk\" (UniqueName: \"kubernetes.io/projected/2936e576-b736-4e51-af25-bf06d2959067-kube-api-access-f2wzk\") pod \"nova-scheduler-0\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872025 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872066 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872116 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872133 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq6nq\" (UniqueName: \"kubernetes.io/projected/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-kube-api-access-pq6nq\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872155 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872187 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-config-data\") pod \"nova-scheduler-0\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872208 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef39d973-397f-4d39-9e6a-7debbc762911-logs\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872229 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872266 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872309 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2wzk\" (UniqueName: \"kubernetes.io/projected/2936e576-b736-4e51-af25-bf06d2959067-kube-api-access-f2wzk\") pod \"nova-scheduler-0\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872366 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-config-data\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.872391 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vtlw\" (UniqueName: \"kubernetes.io/projected/ef39d973-397f-4d39-9e6a-7debbc762911-kube-api-access-7vtlw\") pod \"nova-metadata-0\" (UID: 
\"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.875704 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef39d973-397f-4d39-9e6a-7debbc762911-logs\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.890104 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vtlw\" (UniqueName: \"kubernetes.io/projected/ef39d973-397f-4d39-9e6a-7debbc762911-kube-api-access-7vtlw\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.890490 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.895704 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.895862 4805 scope.go:117] "RemoveContainer" containerID="0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.896530 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.897900 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-config-data\") pod \"nova-scheduler-0\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.899078 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.899462 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198\": container with ID starting with 0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198 not found: ID does not exist" containerID="0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.899493 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198"} err="failed to get container status \"0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198\": rpc error: code = NotFound desc = could not find container \"0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198\": container with ID starting with 
0cc8b38da7b06da14bfde0c6de19699b9e5356ac7ce4963a199ab21e85ebc198 not found: ID does not exist" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.899562 4805 scope.go:117] "RemoveContainer" containerID="fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.900293 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-config-data\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.900915 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.901306 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2wzk\" (UniqueName: \"kubernetes.io/projected/2936e576-b736-4e51-af25-bf06d2959067-kube-api-access-f2wzk\") pod \"nova-scheduler-0\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.910908 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.917053 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq6nq\" (UniqueName: \"kubernetes.io/projected/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-kube-api-access-pq6nq\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.917816 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c8e81a5-b0c2-4a31-8383-8022fa10fe96-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7c8e81a5-b0c2-4a31-8383-8022fa10fe96\") " pod="openstack/kube-state-metrics-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.918858 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " pod="openstack/nova-metadata-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.924088 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.960547 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.967205 4805 scope.go:117] "RemoveContainer" containerID="fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916" Feb 17 00:46:17 crc kubenswrapper[4805]: E0217 00:46:17.967834 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916\": container with ID starting with fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916 not found: ID does not exist" 
containerID="fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.967896 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916"} err="failed to get container status \"fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916\": rpc error: code = NotFound desc = could not find container \"fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916\": container with ID starting with fa2626b7c42a8f9e996d5bf8f7bce488d7717cc5920346878aa26c7ecd0cb916 not found: ID does not exist" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.967864 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.975862 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.977827 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 17 00:46:17 crc kubenswrapper[4805]: I0217 00:46:17.991833 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.079164 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c841db68-4473-4305-91cc-75ec6f257ac0-config-data\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.079495 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt88m\" (UniqueName: \"kubernetes.io/projected/c841db68-4473-4305-91cc-75ec6f257ac0-kube-api-access-tt88m\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.080286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/c841db68-4473-4305-91cc-75ec6f257ac0-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.080318 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c841db68-4473-4305-91cc-75ec6f257ac0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.208282 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/c841db68-4473-4305-91cc-75ec6f257ac0-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.208344 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c841db68-4473-4305-91cc-75ec6f257ac0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.208471 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c841db68-4473-4305-91cc-75ec6f257ac0-config-data\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.208520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt88m\" (UniqueName: \"kubernetes.io/projected/c841db68-4473-4305-91cc-75ec6f257ac0-kube-api-access-tt88m\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.213712 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/c841db68-4473-4305-91cc-75ec6f257ac0-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.215306 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c841db68-4473-4305-91cc-75ec6f257ac0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.215924 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.223529 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c841db68-4473-4305-91cc-75ec6f257ac0-config-data\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.224487 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.228510 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt88m\" (UniqueName: \"kubernetes.io/projected/c841db68-4473-4305-91cc-75ec6f257ac0-kube-api-access-tt88m\") pod \"mysqld-exporter-0\" (UID: \"c841db68-4473-4305-91cc-75ec6f257ac0\") " pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.278892 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.297304 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 17 00:46:18 crc kubenswrapper[4805]: W0217 00:46:18.303148 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe0a1ae2_2057_4b54_b01d_ca8bafe09be3.slice/crio-ecf4acaef74c8039af0aea6df24fb7da44d7769baf45e592bd86e3a2980f7348 WatchSource:0}: Error finding container ecf4acaef74c8039af0aea6df24fb7da44d7769baf45e592bd86e3a2980f7348: Status 404 returned error can't find the container with id ecf4acaef74c8039af0aea6df24fb7da44d7769baf45e592bd86e3a2980f7348 Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.384233 4805 generic.go:334] "Generic (PLEG): container finished" podID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerID="f838df9915baf461ff6f626de60e4d78a2231d34e47e458da0346be485602e9c" exitCode=0 Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.384539 4805 generic.go:334] "Generic (PLEG): container finished" podID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerID="ca50c322fbc6cc974342d4fd9cc9184d3b3addce0e501fa53060ca27d9ddcce6" exitCode=0 Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.384549 4805 generic.go:334] "Generic (PLEG): container finished" podID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerID="647f5e61f4fad824e69b8e3b7b72a9a15e50feb1eef3fc00d642c22c0a441735" exitCode=0 Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.384309 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7f08d9c-83a5-4818-992b-904fb159ec36","Type":"ContainerDied","Data":"f838df9915baf461ff6f626de60e4d78a2231d34e47e458da0346be485602e9c"} Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.384619 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7f08d9c-83a5-4818-992b-904fb159ec36","Type":"ContainerDied","Data":"ca50c322fbc6cc974342d4fd9cc9184d3b3addce0e501fa53060ca27d9ddcce6"} Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.384632 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7f08d9c-83a5-4818-992b-904fb159ec36","Type":"ContainerDied","Data":"647f5e61f4fad824e69b8e3b7b72a9a15e50feb1eef3fc00d642c22c0a441735"} Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.387717 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fe0a1ae2-2057-4b54-b01d-ca8bafe09be3","Type":"ContainerStarted","Data":"ecf4acaef74c8039af0aea6df24fb7da44d7769baf45e592bd86e3a2980f7348"} Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.530238 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.800338 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c79f087-7a87-405e-8a91-8450f22de65d" path="/var/lib/kubelet/pods/1c79f087-7a87-405e-8a91-8450f22de65d/volumes" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.801247 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c" path="/var/lib/kubelet/pods/2b81ed6f-cf86-4fc8-89e0-6cb03f628e0c/volumes" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.802609 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a661bdf7-dfb8-4413-8caa-e674549bc204" path="/var/lib/kubelet/pods/a661bdf7-dfb8-4413-8caa-e674549bc204/volumes" Feb 17 00:46:18 crc kubenswrapper[4805]: I0217 00:46:18.804552 4805 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="faf65c97-4fea-4e54-a6d2-847c03970bf5" path="/var/lib/kubelet/pods/faf65c97-4fea-4e54-a6d2-847c03970bf5/volumes" Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:18.852186 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 00:46:20 crc kubenswrapper[4805]: W0217 00:46:18.867120 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c8e81a5_b0c2_4a31_8383_8022fa10fe96.slice/crio-479d8bb5a25b503bbee3827863eea582146814400318734be97a7c3b795a2a89 WatchSource:0}: Error finding container 479d8bb5a25b503bbee3827863eea582146814400318734be97a7c3b795a2a89: Status 404 returned error can't find the container with id 479d8bb5a25b503bbee3827863eea582146814400318734be97a7c3b795a2a89 Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:18.877848 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.039997 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.400257 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"fe0a1ae2-2057-4b54-b01d-ca8bafe09be3","Type":"ContainerStarted","Data":"cfaa65b3d7509021997b67494671fdaa91774a38c01fb2cb7623927d668e9245"} Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.401639 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.407342 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"c841db68-4473-4305-91cc-75ec6f257ac0","Type":"ContainerStarted","Data":"b7a152c0af404130e128d2f13b2a277c09b1022857d87d97bdb691c56d13fcb7"} Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.410354 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef39d973-397f-4d39-9e6a-7debbc762911","Type":"ContainerStarted","Data":"d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237"} Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.410383 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef39d973-397f-4d39-9e6a-7debbc762911","Type":"ContainerStarted","Data":"0710179c127e5f937765f54a52ff1542eb7c9a3cc31a0a0b6da15c12759cc759"} Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.420873 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fe95195-d873-4aee-8a51-d8986cf5b205","Type":"ContainerStarted","Data":"ba40aac79ba247385ad0835055595ec4e5bd2f6b5927df7ad5c90d0ebf25350c"} Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.421024 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.423812 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7c8e81a5-b0c2-4a31-8383-8022fa10fe96","Type":"ContainerStarted","Data":"479d8bb5a25b503bbee3827863eea582146814400318734be97a7c3b795a2a89"} Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.426817 4805 generic.go:334] "Generic (PLEG): container finished" podID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerID="077f9413eaf07761963bd4c8ed1ede34469ab546d77b384a73809a839c13820e" 
exitCode=0 Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.426870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7f08d9c-83a5-4818-992b-904fb159ec36","Type":"ContainerDied","Data":"077f9413eaf07761963bd4c8ed1ede34469ab546d77b384a73809a839c13820e"} Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.430447 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2936e576-b736-4e51-af25-bf06d2959067","Type":"ContainerStarted","Data":"a6dbd8064ac6fddbdb937b04650ebd5dafbcb552c7d9dc7241156aaf34fae465"} Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.430469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2936e576-b736-4e51-af25-bf06d2959067","Type":"ContainerStarted","Data":"c82bb4497971f5ae06ad00c644850d06c22f3c41cc903275f2825b4d0313b0e2"} Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.432862 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.432847978 podStartE2EDuration="2.432847978s" podCreationTimestamp="2026-02-17 00:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:19.416829152 +0000 UTC m=+1405.432638550" watchObservedRunningTime="2026-02-17 00:46:19.432847978 +0000 UTC m=+1405.448657396" Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.452979 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.954300209 podStartE2EDuration="7.452962387s" podCreationTimestamp="2026-02-17 00:46:12 +0000 UTC" firstStartedPulling="2026-02-17 00:46:14.391636492 +0000 UTC m=+1400.407445890" lastFinishedPulling="2026-02-17 00:46:18.89029867 +0000 UTC m=+1404.906108068" observedRunningTime="2026-02-17 00:46:19.445827069 +0000 UTC m=+1405.461636467" watchObservedRunningTime="2026-02-17 00:46:19.452962387 +0000 UTC m=+1405.468771795" Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:19.462452 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.4624372709999998 podStartE2EDuration="2.462437271s" podCreationTimestamp="2026-02-17 00:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:19.460415605 +0000 UTC m=+1405.476225003" watchObservedRunningTime="2026-02-17 00:46:19.462437271 +0000 UTC m=+1405.478246669" Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.025012 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.460512 4805 generic.go:334] "Generic (PLEG): container finished" podID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerID="937bd9e1f51a14bbe70f454cd781f8b5df6908f9362bac05282f4b16be8c02a4" exitCode=0 Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.460575 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"914d4f54-76f7-402b-b453-b5badec5d1bb","Type":"ContainerDied","Data":"937bd9e1f51a14bbe70f454cd781f8b5df6908f9362bac05282f4b16be8c02a4"} Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.468797 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"ef39d973-397f-4d39-9e6a-7debbc762911","Type":"ContainerStarted","Data":"c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898"} Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.528208 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.528189889 podStartE2EDuration="3.528189889s" podCreationTimestamp="2026-02-17 00:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:20.508999675 +0000 UTC m=+1406.524809083" watchObservedRunningTime="2026-02-17 00:46:20.528189889 +0000 UTC m=+1406.543999287" Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.805259 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.813981 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.997263 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/914d4f54-76f7-402b-b453-b5badec5d1bb-logs\") pod \"914d4f54-76f7-402b-b453-b5badec5d1bb\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.997307 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4knq\" (UniqueName: \"kubernetes.io/projected/c7f08d9c-83a5-4818-992b-904fb159ec36-kube-api-access-b4knq\") pod \"c7f08d9c-83a5-4818-992b-904fb159ec36\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.997389 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8x2m\" (UniqueName: \"kubernetes.io/projected/914d4f54-76f7-402b-b453-b5badec5d1bb-kube-api-access-n8x2m\") pod \"914d4f54-76f7-402b-b453-b5badec5d1bb\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.997422 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-config-data\") pod \"914d4f54-76f7-402b-b453-b5badec5d1bb\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.997474 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-combined-ca-bundle\") pod \"c7f08d9c-83a5-4818-992b-904fb159ec36\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.997575 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-combined-ca-bundle\") pod \"914d4f54-76f7-402b-b453-b5badec5d1bb\" (UID: \"914d4f54-76f7-402b-b453-b5badec5d1bb\") " Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.997596 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-config-data\") pod \"c7f08d9c-83a5-4818-992b-904fb159ec36\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " Feb 17 00:46:20 crc 
kubenswrapper[4805]: I0217 00:46:20.997629 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-scripts\") pod \"c7f08d9c-83a5-4818-992b-904fb159ec36\" (UID: \"c7f08d9c-83a5-4818-992b-904fb159ec36\") " Feb 17 00:46:20 crc kubenswrapper[4805]: I0217 00:46:20.998059 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914d4f54-76f7-402b-b453-b5badec5d1bb-logs" (OuterVolumeSpecName: "logs") pod "914d4f54-76f7-402b-b453-b5badec5d1bb" (UID: "914d4f54-76f7-402b-b453-b5badec5d1bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.002797 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914d4f54-76f7-402b-b453-b5badec5d1bb-kube-api-access-n8x2m" (OuterVolumeSpecName: "kube-api-access-n8x2m") pod "914d4f54-76f7-402b-b453-b5badec5d1bb" (UID: "914d4f54-76f7-402b-b453-b5badec5d1bb"). InnerVolumeSpecName "kube-api-access-n8x2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.002865 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f08d9c-83a5-4818-992b-904fb159ec36-kube-api-access-b4knq" (OuterVolumeSpecName: "kube-api-access-b4knq") pod "c7f08d9c-83a5-4818-992b-904fb159ec36" (UID: "c7f08d9c-83a5-4818-992b-904fb159ec36"). InnerVolumeSpecName "kube-api-access-b4knq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.023526 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-scripts" (OuterVolumeSpecName: "scripts") pod "c7f08d9c-83a5-4818-992b-904fb159ec36" (UID: "c7f08d9c-83a5-4818-992b-904fb159ec36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.048030 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "914d4f54-76f7-402b-b453-b5badec5d1bb" (UID: "914d4f54-76f7-402b-b453-b5badec5d1bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.057991 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-config-data" (OuterVolumeSpecName: "config-data") pod "914d4f54-76f7-402b-b453-b5badec5d1bb" (UID: "914d4f54-76f7-402b-b453-b5badec5d1bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.099679 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8x2m\" (UniqueName: \"kubernetes.io/projected/914d4f54-76f7-402b-b453-b5badec5d1bb-kube-api-access-n8x2m\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.099714 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.099723 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914d4f54-76f7-402b-b453-b5badec5d1bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.099732 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.099740 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/914d4f54-76f7-402b-b453-b5badec5d1bb-logs\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.099749 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4knq\" (UniqueName: \"kubernetes.io/projected/c7f08d9c-83a5-4818-992b-904fb159ec36-kube-api-access-b4knq\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.155123 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7f08d9c-83a5-4818-992b-904fb159ec36" (UID: "c7f08d9c-83a5-4818-992b-904fb159ec36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.171172 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-config-data" (OuterVolumeSpecName: "config-data") pod "c7f08d9c-83a5-4818-992b-904fb159ec36" (UID: "c7f08d9c-83a5-4818-992b-904fb159ec36"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.201451 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.201484 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f08d9c-83a5-4818-992b-904fb159ec36-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.480844 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7f08d9c-83a5-4818-992b-904fb159ec36","Type":"ContainerDied","Data":"8fd5abc603a1f46ab17b6c0731ae2157b226e73f1547a10e5a5a4e1b90abae54"} Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.480914 4805 scope.go:117] "RemoveContainer" containerID="f838df9915baf461ff6f626de60e4d78a2231d34e47e458da0346be485602e9c" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.480966 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.482869 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"c841db68-4473-4305-91cc-75ec6f257ac0","Type":"ContainerStarted","Data":"9d880f0beb8235a297510672b05ad0e1710955ff1368adf020ff3a2db483fc09"} Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.487683 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7c8e81a5-b0c2-4a31-8383-8022fa10fe96","Type":"ContainerStarted","Data":"4bc6007ceadcfaf4b7a35737ee57b8ca4a525b6b38ab5cf05741449b6a74b35b"} Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.488873 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.490919 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"914d4f54-76f7-402b-b453-b5badec5d1bb","Type":"ContainerDied","Data":"f5a69131569b4104572f8c3805e62182635286c403e6702f3a822dfb15a52e6f"} Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.491054 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="ceilometer-central-agent" containerID="cri-o://fb14ba424999982e591effe3afaac3826e21b73611dc4b273dcf2a7f9c9bbd2c" gracePeriod=30 Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.491178 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="proxy-httpd" containerID="cri-o://ba40aac79ba247385ad0835055595ec4e5bd2f6b5927df7ad5c90d0ebf25350c" gracePeriod=30 Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.491219 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="sg-core" containerID="cri-o://a8e46b87df65342fcf5bb8a857c34e1d40d0ebb8f0827f7adcf8d023e0df672b" gracePeriod=30 Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.491249 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" 
containerName="ceilometer-notification-agent" containerID="cri-o://8d36ab16607ad1c9dd9ac4efe5539a1e8707a2a723db9b6c678a04c5388efdca" gracePeriod=30 Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.492576 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.529738 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.147133534 podStartE2EDuration="4.529713259s" podCreationTimestamp="2026-02-17 00:46:17 +0000 UTC" firstStartedPulling="2026-02-17 00:46:19.06747205 +0000 UTC m=+1405.083281448" lastFinishedPulling="2026-02-17 00:46:20.450051775 +0000 UTC m=+1406.465861173" observedRunningTime="2026-02-17 00:46:21.518059435 +0000 UTC m=+1407.533868843" watchObservedRunningTime="2026-02-17 00:46:21.529713259 +0000 UTC m=+1407.545522657" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.553594 4805 scope.go:117] "RemoveContainer" containerID="077f9413eaf07761963bd4c8ed1ede34469ab546d77b384a73809a839c13820e" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.556304 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.01617085 podStartE2EDuration="4.556286289s" podCreationTimestamp="2026-02-17 00:46:17 +0000 UTC" firstStartedPulling="2026-02-17 00:46:18.885786894 +0000 UTC m=+1404.901596292" lastFinishedPulling="2026-02-17 00:46:20.425902343 +0000 UTC m=+1406.441711731" observedRunningTime="2026-02-17 00:46:21.549879431 +0000 UTC m=+1407.565688839" watchObservedRunningTime="2026-02-17 00:46:21.556286289 +0000 UTC m=+1407.572095687" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.583894 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.586285 4805 scope.go:117] "RemoveContainer" containerID="ca50c322fbc6cc974342d4fd9cc9184d3b3addce0e501fa53060ca27d9ddcce6" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.612364 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.639898 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.653142 4805 scope.go:117] "RemoveContainer" containerID="647f5e61f4fad824e69b8e3b7b72a9a15e50feb1eef3fc00d642c22c0a441735" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.657456 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671043 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:21 crc kubenswrapper[4805]: E0217 00:46:21.671542 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-evaluator" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671556 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-evaluator" Feb 17 00:46:21 crc kubenswrapper[4805]: E0217 00:46:21.671583 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerName="nova-api-log" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671590 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerName="nova-api-log" Feb 17 00:46:21 crc kubenswrapper[4805]: E0217 00:46:21.671611 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerName="nova-api-api" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671618 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerName="nova-api-api" Feb 17 00:46:21 crc kubenswrapper[4805]: E0217 00:46:21.671627 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-api" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671632 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-api" Feb 17 00:46:21 crc kubenswrapper[4805]: E0217 00:46:21.671644 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-listener" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671649 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-listener" Feb 17 00:46:21 crc kubenswrapper[4805]: E0217 00:46:21.671664 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-notifier" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671670 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-notifier" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671851 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-notifier" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671873 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-api" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671880 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-listener" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671891 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" containerName="aodh-evaluator" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671904 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerName="nova-api-log" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.671912 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="914d4f54-76f7-402b-b453-b5badec5d1bb" containerName="nova-api-api" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.673017 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.678857 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.690253 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.692567 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.695005 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.695926 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.696010 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.696237 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-drlz8" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.696494 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.700978 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.703819 4805 scope.go:117] "RemoveContainer" containerID="937bd9e1f51a14bbe70f454cd781f8b5df6908f9362bac05282f4b16be8c02a4" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.715887 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.782087 4805 scope.go:117] "RemoveContainer" containerID="7d42044813be7120c6c614bcc9cf97b18880ae87fe0e45589a057334b702dedd" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.816683 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-internal-tls-certs\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.816731 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab8160c-d9dd-4557-8d49-c432ccec586a-logs\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.816763 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k99c\" (UniqueName: \"kubernetes.io/projected/4684eac1-c5ec-46dd-b3f7-87dba4896232-kube-api-access-9k99c\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.816782 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-config-data\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.816898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmxcw\" (UniqueName: \"kubernetes.io/projected/7ab8160c-d9dd-4557-8d49-c432ccec586a-kube-api-access-kmxcw\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.816938 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-config-data\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.816959 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-scripts\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.817072 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-public-tls-certs\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.817131 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.817216 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.919708 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmxcw\" (UniqueName: \"kubernetes.io/projected/7ab8160c-d9dd-4557-8d49-c432ccec586a-kube-api-access-kmxcw\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.919957 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-config-data\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.920827 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-scripts\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.920964 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-public-tls-certs\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.921114 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.921279 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.921520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-internal-tls-certs\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.921657 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab8160c-d9dd-4557-8d49-c432ccec586a-logs\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.922027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab8160c-d9dd-4557-8d49-c432ccec586a-logs\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.922059 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k99c\" (UniqueName: \"kubernetes.io/projected/4684eac1-c5ec-46dd-b3f7-87dba4896232-kube-api-access-9k99c\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.922192 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-config-data\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.927016 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-scripts\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.927127 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-internal-tls-certs\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.927150 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.927143 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-config-data\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.927068 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-public-tls-certs\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") 
" pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.928365 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-config-data\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.929506 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4684eac1-c5ec-46dd-b3f7-87dba4896232-combined-ca-bundle\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.935424 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmxcw\" (UniqueName: \"kubernetes.io/projected/7ab8160c-d9dd-4557-8d49-c432ccec586a-kube-api-access-kmxcw\") pod \"nova-api-0\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " pod="openstack/nova-api-0" Feb 17 00:46:21 crc kubenswrapper[4805]: I0217 00:46:21.941454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k99c\" (UniqueName: \"kubernetes.io/projected/4684eac1-c5ec-46dd-b3f7-87dba4896232-kube-api-access-9k99c\") pod \"aodh-0\" (UID: \"4684eac1-c5ec-46dd-b3f7-87dba4896232\") " pod="openstack/aodh-0" Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.064928 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.075340 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.504173 4805 generic.go:334] "Generic (PLEG): container finished" podID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerID="ba40aac79ba247385ad0835055595ec4e5bd2f6b5927df7ad5c90d0ebf25350c" exitCode=0 Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.504555 4805 generic.go:334] "Generic (PLEG): container finished" podID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerID="a8e46b87df65342fcf5bb8a857c34e1d40d0ebb8f0827f7adcf8d023e0df672b" exitCode=2 Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.504567 4805 generic.go:334] "Generic (PLEG): container finished" podID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerID="8d36ab16607ad1c9dd9ac4efe5539a1e8707a2a723db9b6c678a04c5388efdca" exitCode=0 Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.504235 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fe95195-d873-4aee-8a51-d8986cf5b205","Type":"ContainerDied","Data":"ba40aac79ba247385ad0835055595ec4e5bd2f6b5927df7ad5c90d0ebf25350c"} Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.504631 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fe95195-d873-4aee-8a51-d8986cf5b205","Type":"ContainerDied","Data":"a8e46b87df65342fcf5bb8a857c34e1d40d0ebb8f0827f7adcf8d023e0df672b"} Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.504646 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fe95195-d873-4aee-8a51-d8986cf5b205","Type":"ContainerDied","Data":"8d36ab16607ad1c9dd9ac4efe5539a1e8707a2a723db9b6c678a04c5388efdca"} Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.595763 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 17 00:46:22 crc 
kubenswrapper[4805]: W0217 00:46:22.600035 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4684eac1_c5ec_46dd_b3f7_87dba4896232.slice/crio-926ae549fe998e0e5a8c080743d52ae1543032f1b8f6f0afbe6b199e6e3cb6fc WatchSource:0}: Error finding container 926ae549fe998e0e5a8c080743d52ae1543032f1b8f6f0afbe6b199e6e3cb6fc: Status 404 returned error can't find the container with id 926ae549fe998e0e5a8c080743d52ae1543032f1b8f6f0afbe6b199e6e3cb6fc Feb 17 00:46:22 crc kubenswrapper[4805]: W0217 00:46:22.664535 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ab8160c_d9dd_4557_8d49_c432ccec586a.slice/crio-2d082b5b6e61346217c1a76a956909946a035bb7023c0aa9c6c327a9fc6f06ba WatchSource:0}: Error finding container 2d082b5b6e61346217c1a76a956909946a035bb7023c0aa9c6c327a9fc6f06ba: Status 404 returned error can't find the container with id 2d082b5b6e61346217c1a76a956909946a035bb7023c0aa9c6c327a9fc6f06ba Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.665623 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.827257 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914d4f54-76f7-402b-b453-b5badec5d1bb" path="/var/lib/kubelet/pods/914d4f54-76f7-402b-b453-b5badec5d1bb/volumes" Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.827941 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f08d9c-83a5-4818-992b-904fb159ec36" path="/var/lib/kubelet/pods/c7f08d9c-83a5-4818-992b-904fb159ec36/volumes" Feb 17 00:46:22 crc kubenswrapper[4805]: I0217 00:46:22.912060 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 00:46:23 crc kubenswrapper[4805]: I0217 00:46:23.224807 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 00:46:23 crc kubenswrapper[4805]: I0217 00:46:23.224866 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 00:46:23 crc kubenswrapper[4805]: I0217 00:46:23.521962 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ab8160c-d9dd-4557-8d49-c432ccec586a","Type":"ContainerStarted","Data":"64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd"} Feb 17 00:46:23 crc kubenswrapper[4805]: I0217 00:46:23.522284 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ab8160c-d9dd-4557-8d49-c432ccec586a","Type":"ContainerStarted","Data":"07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e"} Feb 17 00:46:23 crc kubenswrapper[4805]: I0217 00:46:23.522299 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ab8160c-d9dd-4557-8d49-c432ccec586a","Type":"ContainerStarted","Data":"2d082b5b6e61346217c1a76a956909946a035bb7023c0aa9c6c327a9fc6f06ba"} Feb 17 00:46:23 crc kubenswrapper[4805]: I0217 00:46:23.526354 4805 generic.go:334] "Generic (PLEG): container finished" podID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerID="fb14ba424999982e591effe3afaac3826e21b73611dc4b273dcf2a7f9c9bbd2c" exitCode=0 Feb 17 00:46:23 crc kubenswrapper[4805]: I0217 00:46:23.526421 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7fe95195-d873-4aee-8a51-d8986cf5b205","Type":"ContainerDied","Data":"fb14ba424999982e591effe3afaac3826e21b73611dc4b273dcf2a7f9c9bbd2c"} Feb 17 00:46:23 crc kubenswrapper[4805]: I0217 00:46:23.529022 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4684eac1-c5ec-46dd-b3f7-87dba4896232","Type":"ContainerStarted","Data":"92cf8777d94ac995f5f395011d6c4e39db6022089617a0c5feec8be317430407"} Feb 17 00:46:23 crc kubenswrapper[4805]: I0217 00:46:23.529049 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4684eac1-c5ec-46dd-b3f7-87dba4896232","Type":"ContainerStarted","Data":"926ae549fe998e0e5a8c080743d52ae1543032f1b8f6f0afbe6b199e6e3cb6fc"} Feb 17 00:46:23 crc kubenswrapper[4805]: I0217 00:46:23.545733 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.54571075 podStartE2EDuration="2.54571075s" podCreationTimestamp="2026-02-17 00:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:23.54210833 +0000 UTC m=+1409.557917728" watchObservedRunningTime="2026-02-17 00:46:23.54571075 +0000 UTC m=+1409.561520148" Feb 17 00:46:23 crc kubenswrapper[4805]: I0217 00:46:23.947689 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.070550 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-sg-core-conf-yaml\") pod \"7fe95195-d873-4aee-8a51-d8986cf5b205\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.070896 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-run-httpd\") pod \"7fe95195-d873-4aee-8a51-d8986cf5b205\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.070999 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-scripts\") pod \"7fe95195-d873-4aee-8a51-d8986cf5b205\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.071161 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-log-httpd\") pod \"7fe95195-d873-4aee-8a51-d8986cf5b205\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.071209 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7fe95195-d873-4aee-8a51-d8986cf5b205" (UID: "7fe95195-d873-4aee-8a51-d8986cf5b205"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.071661 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-combined-ca-bundle\") pod \"7fe95195-d873-4aee-8a51-d8986cf5b205\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.071762 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-config-data\") pod \"7fe95195-d873-4aee-8a51-d8986cf5b205\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.071876 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghbt2\" (UniqueName: \"kubernetes.io/projected/7fe95195-d873-4aee-8a51-d8986cf5b205-kube-api-access-ghbt2\") pod \"7fe95195-d873-4aee-8a51-d8986cf5b205\" (UID: \"7fe95195-d873-4aee-8a51-d8986cf5b205\") " Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.072116 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7fe95195-d873-4aee-8a51-d8986cf5b205" (UID: "7fe95195-d873-4aee-8a51-d8986cf5b205"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.072714 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.072777 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fe95195-d873-4aee-8a51-d8986cf5b205-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.077279 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-scripts" (OuterVolumeSpecName: "scripts") pod "7fe95195-d873-4aee-8a51-d8986cf5b205" (UID: "7fe95195-d873-4aee-8a51-d8986cf5b205"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.077893 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe95195-d873-4aee-8a51-d8986cf5b205-kube-api-access-ghbt2" (OuterVolumeSpecName: "kube-api-access-ghbt2") pod "7fe95195-d873-4aee-8a51-d8986cf5b205" (UID: "7fe95195-d873-4aee-8a51-d8986cf5b205"). InnerVolumeSpecName "kube-api-access-ghbt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.114045 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7fe95195-d873-4aee-8a51-d8986cf5b205" (UID: "7fe95195-d873-4aee-8a51-d8986cf5b205"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.159922 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fe95195-d873-4aee-8a51-d8986cf5b205" (UID: "7fe95195-d873-4aee-8a51-d8986cf5b205"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.177597 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghbt2\" (UniqueName: \"kubernetes.io/projected/7fe95195-d873-4aee-8a51-d8986cf5b205-kube-api-access-ghbt2\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.177629 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.177640 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.177652 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.190487 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-config-data" (OuterVolumeSpecName: "config-data") pod "7fe95195-d873-4aee-8a51-d8986cf5b205" (UID: "7fe95195-d873-4aee-8a51-d8986cf5b205"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.279095 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe95195-d873-4aee-8a51-d8986cf5b205-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.543237 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fe95195-d873-4aee-8a51-d8986cf5b205","Type":"ContainerDied","Data":"5c2998b368f6c6c7d2310182533854d1c9b9e9b5b940377d87e74682ecd07824"} Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.543277 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.543578 4805 scope.go:117] "RemoveContainer" containerID="ba40aac79ba247385ad0835055595ec4e5bd2f6b5927df7ad5c90d0ebf25350c" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.549182 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4684eac1-c5ec-46dd-b3f7-87dba4896232","Type":"ContainerStarted","Data":"5a81ab96ec36aa2f58604e49fb72ecea982a20268e188968706ea539b01998e1"} Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.549249 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4684eac1-c5ec-46dd-b3f7-87dba4896232","Type":"ContainerStarted","Data":"01d3cd245d4c225823c200d2bea20892c2ea9ca96102888554ecd0371e12b002"} Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.571955 4805 scope.go:117] "RemoveContainer" containerID="a8e46b87df65342fcf5bb8a857c34e1d40d0ebb8f0827f7adcf8d023e0df672b" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.578262 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.590400 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.602202 4805 scope.go:117] "RemoveContainer" containerID="8d36ab16607ad1c9dd9ac4efe5539a1e8707a2a723db9b6c678a04c5388efdca" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.609926 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:24 crc kubenswrapper[4805]: E0217 00:46:24.610536 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="ceilometer-notification-agent" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.610566 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="ceilometer-notification-agent" Feb 17 00:46:24 crc kubenswrapper[4805]: E0217 00:46:24.610599 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="proxy-httpd" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.610609 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="proxy-httpd" Feb 17 00:46:24 crc kubenswrapper[4805]: E0217 00:46:24.610625 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="ceilometer-central-agent" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.610635 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="ceilometer-central-agent" Feb 17 00:46:24 crc kubenswrapper[4805]: E0217 00:46:24.610676 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="sg-core" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.610685 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="sg-core" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.610953 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="ceilometer-notification-agent" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.610981 4805 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="ceilometer-central-agent" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.611001 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="sg-core" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.611025 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" containerName="proxy-httpd" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.613742 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.616579 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.616595 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.616788 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.634656 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.655548 4805 scope.go:117] "RemoveContainer" containerID="fb14ba424999982e591effe3afaac3826e21b73611dc4b273dcf2a7f9c9bbd2c" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.795374 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-scripts\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.795433 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-config-data\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.795460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.795619 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.795708 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-log-httpd\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.795786 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-run-httpd\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.795827 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhmsk\" (UniqueName: \"kubernetes.io/projected/43acf0a2-63f5-48f0-ae06-b832705ef2a6-kube-api-access-nhmsk\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.795852 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.800729 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fe95195-d873-4aee-8a51-d8986cf5b205" path="/var/lib/kubelet/pods/7fe95195-d873-4aee-8a51-d8986cf5b205/volumes" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.898762 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.898803 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.898852 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-log-httpd\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.898896 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-run-httpd\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.898921 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhmsk\" (UniqueName: \"kubernetes.io/projected/43acf0a2-63f5-48f0-ae06-b832705ef2a6-kube-api-access-nhmsk\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.898938 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.899409 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-scripts\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.899451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-config-data\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.899494 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-run-httpd\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.899922 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-log-httpd\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.903099 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-scripts\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.903813 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.904836 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-config-data\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.905727 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.916823 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.918308 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhmsk\" (UniqueName: \"kubernetes.io/projected/43acf0a2-63f5-48f0-ae06-b832705ef2a6-kube-api-access-nhmsk\") pod \"ceilometer-0\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " pod="openstack/ceilometer-0" Feb 17 00:46:24 crc kubenswrapper[4805]: I0217 00:46:24.982535 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:25 crc kubenswrapper[4805]: I0217 00:46:25.502258 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:25 crc kubenswrapper[4805]: I0217 00:46:25.562047 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"4684eac1-c5ec-46dd-b3f7-87dba4896232","Type":"ContainerStarted","Data":"2c3783cea9180be5b9c014666fc9aeebad7f6d26bf92bb319e9b6c5638f92d5c"} Feb 17 00:46:25 crc kubenswrapper[4805]: I0217 00:46:25.563638 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43acf0a2-63f5-48f0-ae06-b832705ef2a6","Type":"ContainerStarted","Data":"362a1c880606bb44090bcb1cc5893b424be8c1a43d2b18c2a02a3c21b25e7388"} Feb 17 00:46:26 crc kubenswrapper[4805]: I0217 00:46:26.578811 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43acf0a2-63f5-48f0-ae06-b832705ef2a6","Type":"ContainerStarted","Data":"2f87850bcce697a354ef3d598968f2d62cd4b4bdb1231f1b9766613e9df3ec35"} Feb 17 00:46:27 crc kubenswrapper[4805]: I0217 00:46:27.613571 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43acf0a2-63f5-48f0-ae06-b832705ef2a6","Type":"ContainerStarted","Data":"f0dd96784ef0a1eaf651dec69ad241cff4adf415f8146fae6953bf2c6658eea7"} Feb 17 00:46:27 crc kubenswrapper[4805]: I0217 00:46:27.690189 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 17 00:46:27 crc kubenswrapper[4805]: I0217 00:46:27.710305 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=4.402665465 podStartE2EDuration="6.710290952s" podCreationTimestamp="2026-02-17 00:46:21 +0000 UTC" firstStartedPulling="2026-02-17 00:46:22.603809599 +0000 UTC m=+1408.619618997" lastFinishedPulling="2026-02-17 00:46:24.911435086 +0000 UTC m=+1410.927244484" observedRunningTime="2026-02-17 00:46:25.584752902 +0000 UTC m=+1411.600562320" watchObservedRunningTime="2026-02-17 00:46:27.710290952 +0000 UTC m=+1413.726100340" Feb 17 00:46:27 crc kubenswrapper[4805]: I0217 00:46:27.911937 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 00:46:27 crc kubenswrapper[4805]: I0217 00:46:27.941072 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 00:46:28 crc kubenswrapper[4805]: I0217 00:46:28.224913 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 00:46:28 crc kubenswrapper[4805]: I0217 00:46:28.224970 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 00:46:28 crc kubenswrapper[4805]: I0217 00:46:28.232511 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 00:46:28 crc kubenswrapper[4805]: I0217 00:46:28.626229 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43acf0a2-63f5-48f0-ae06-b832705ef2a6","Type":"ContainerStarted","Data":"3f393c15ea7df46e5dd1dd67ae46d4d4aa5cc4764d4dc85f73d42cd9762691d4"} Feb 17 00:46:28 crc kubenswrapper[4805]: I0217 00:46:28.749515 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 00:46:29 crc kubenswrapper[4805]: I0217 
00:46:29.239450 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.233:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 00:46:29 crc kubenswrapper[4805]: I0217 00:46:29.239498 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.233:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 00:46:29 crc kubenswrapper[4805]: I0217 00:46:29.674512 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43acf0a2-63f5-48f0-ae06-b832705ef2a6","Type":"ContainerStarted","Data":"ec0c2871e6afe66d3ea6a3a07ef450509c823f0e90339b32720995708b39b0e5"} Feb 17 00:46:29 crc kubenswrapper[4805]: I0217 00:46:29.721432 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.555427473 podStartE2EDuration="5.721416456s" podCreationTimestamp="2026-02-17 00:46:24 +0000 UTC" firstStartedPulling="2026-02-17 00:46:25.474833163 +0000 UTC m=+1411.490642561" lastFinishedPulling="2026-02-17 00:46:28.640822146 +0000 UTC m=+1414.656631544" observedRunningTime="2026-02-17 00:46:29.720797499 +0000 UTC m=+1415.736606897" watchObservedRunningTime="2026-02-17 00:46:29.721416456 +0000 UTC m=+1415.737225854" Feb 17 00:46:30 crc kubenswrapper[4805]: I0217 00:46:30.683198 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 00:46:32 crc kubenswrapper[4805]: I0217 00:46:32.065985 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 00:46:32 crc kubenswrapper[4805]: I0217 00:46:32.067269 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 00:46:33 crc kubenswrapper[4805]: I0217 00:46:33.148482 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 00:46:33 crc kubenswrapper[4805]: I0217 00:46:33.148967 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 00:46:38 crc kubenswrapper[4805]: I0217 00:46:38.240304 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 00:46:38 crc kubenswrapper[4805]: I0217 00:46:38.241135 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 00:46:38 crc kubenswrapper[4805]: I0217 00:46:38.250356 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 00:46:38 crc kubenswrapper[4805]: I0217 00:46:38.251416 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 
00:46:40.409670 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.520632 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-config-data\") pod \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.520840 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpbnq\" (UniqueName: \"kubernetes.io/projected/a5230e8c-2abe-4835-8fed-ad359b0f52a2-kube-api-access-xpbnq\") pod \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.520962 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-combined-ca-bundle\") pod \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\" (UID: \"a5230e8c-2abe-4835-8fed-ad359b0f52a2\") " Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.531823 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5230e8c-2abe-4835-8fed-ad359b0f52a2-kube-api-access-xpbnq" (OuterVolumeSpecName: "kube-api-access-xpbnq") pod "a5230e8c-2abe-4835-8fed-ad359b0f52a2" (UID: "a5230e8c-2abe-4835-8fed-ad359b0f52a2"). InnerVolumeSpecName "kube-api-access-xpbnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.553434 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-config-data" (OuterVolumeSpecName: "config-data") pod "a5230e8c-2abe-4835-8fed-ad359b0f52a2" (UID: "a5230e8c-2abe-4835-8fed-ad359b0f52a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.557174 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5230e8c-2abe-4835-8fed-ad359b0f52a2" (UID: "a5230e8c-2abe-4835-8fed-ad359b0f52a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.624547 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpbnq\" (UniqueName: \"kubernetes.io/projected/a5230e8c-2abe-4835-8fed-ad359b0f52a2-kube-api-access-xpbnq\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.624583 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.624597 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5230e8c-2abe-4835-8fed-ad359b0f52a2-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.809710 4805 generic.go:334] "Generic (PLEG): container finished" podID="a5230e8c-2abe-4835-8fed-ad359b0f52a2" containerID="415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62" exitCode=137 Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.809845 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.809914 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a5230e8c-2abe-4835-8fed-ad359b0f52a2","Type":"ContainerDied","Data":"415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62"} Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.810488 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a5230e8c-2abe-4835-8fed-ad359b0f52a2","Type":"ContainerDied","Data":"91b0c6fab66c199fd69f7b81075838d01226df633217c1da60e92256821fcff6"} Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.810532 4805 scope.go:117] "RemoveContainer" containerID="415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.852070 4805 scope.go:117] "RemoveContainer" containerID="415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62" Feb 17 00:46:40 crc kubenswrapper[4805]: E0217 00:46:40.853256 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62\": container with ID starting with 415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62 not found: ID does not exist" containerID="415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.853305 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62"} err="failed to get container status \"415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62\": rpc error: code = NotFound desc = could not find container \"415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62\": container with ID starting with 415b34d8f3ad633833f64970ce138f10c7137e82cc405a0222a672783c8bbd62 not found: ID does not exist" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.864479 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 
00:46:40.905143 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.914204 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 00:46:40 crc kubenswrapper[4805]: E0217 00:46:40.914873 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5230e8c-2abe-4835-8fed-ad359b0f52a2" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.914902 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5230e8c-2abe-4835-8fed-ad359b0f52a2" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.915286 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5230e8c-2abe-4835-8fed-ad359b0f52a2" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.916476 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.928233 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.928464 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.929301 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.948062 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.948168 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.948201 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.948994 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.949052 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw9qg\" (UniqueName: \"kubernetes.io/projected/a4bdd596-26e7-491d-84ca-d19f950eb389-kube-api-access-gw9qg\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:40 crc kubenswrapper[4805]: I0217 00:46:40.951859 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.051726 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.051779 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw9qg\" (UniqueName: \"kubernetes.io/projected/a4bdd596-26e7-491d-84ca-d19f950eb389-kube-api-access-gw9qg\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.052168 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.052217 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.052235 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.055944 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.056306 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.056732 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.057027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a4bdd596-26e7-491d-84ca-d19f950eb389-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.070110 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw9qg\" (UniqueName: \"kubernetes.io/projected/a4bdd596-26e7-491d-84ca-d19f950eb389-kube-api-access-gw9qg\") pod \"nova-cell1-novncproxy-0\" (UID: \"a4bdd596-26e7-491d-84ca-d19f950eb389\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.246664 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.780985 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 00:46:41 crc kubenswrapper[4805]: W0217 00:46:41.782051 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4bdd596_26e7_491d_84ca_d19f950eb389.slice/crio-fcd7aa11287c1845cab88205c18aea74861d5fde4c77f4091164e856e53d0b49 WatchSource:0}: Error finding container fcd7aa11287c1845cab88205c18aea74861d5fde4c77f4091164e856e53d0b49: Status 404 returned error can't find the container with id fcd7aa11287c1845cab88205c18aea74861d5fde4c77f4091164e856e53d0b49 Feb 17 00:46:41 crc kubenswrapper[4805]: I0217 00:46:41.824877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a4bdd596-26e7-491d-84ca-d19f950eb389","Type":"ContainerStarted","Data":"fcd7aa11287c1845cab88205c18aea74861d5fde4c77f4091164e856e53d0b49"} Feb 17 00:46:42 crc kubenswrapper[4805]: I0217 00:46:42.073562 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 00:46:42 crc kubenswrapper[4805]: I0217 00:46:42.074087 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 00:46:42 crc kubenswrapper[4805]: I0217 00:46:42.081497 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 00:46:42 crc kubenswrapper[4805]: I0217 00:46:42.087734 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 00:46:42 crc kubenswrapper[4805]: I0217 00:46:42.811861 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5230e8c-2abe-4835-8fed-ad359b0f52a2" path="/var/lib/kubelet/pods/a5230e8c-2abe-4835-8fed-ad359b0f52a2/volumes" Feb 17 00:46:42 crc kubenswrapper[4805]: I0217 00:46:42.854707 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a4bdd596-26e7-491d-84ca-d19f950eb389","Type":"ContainerStarted","Data":"987c9507155fedd8a3e321df49dbdeb282f6f78fa4649367b1aadd9888c741c0"} Feb 17 00:46:42 crc kubenswrapper[4805]: I0217 00:46:42.855211 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 00:46:42 crc kubenswrapper[4805]: I0217 00:46:42.861076 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 00:46:42 crc kubenswrapper[4805]: I0217 00:46:42.875475 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.875459485 podStartE2EDuration="2.875459485s" 
podCreationTimestamp="2026-02-17 00:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:42.875136596 +0000 UTC m=+1428.890946034" watchObservedRunningTime="2026-02-17 00:46:42.875459485 +0000 UTC m=+1428.891268883" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.064511 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-glxm7"] Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.066177 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.078656 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-glxm7"] Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.100480 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.100524 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc8js\" (UniqueName: \"kubernetes.io/projected/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-kube-api-access-sc8js\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.100557 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.100590 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.100626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-config\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.100682 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.202649 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.202710 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.202756 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-config\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.202813 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.202895 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.202918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc8js\" (UniqueName: \"kubernetes.io/projected/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-kube-api-access-sc8js\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.203810 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-config\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.203810 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.204187 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.206478 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " 
pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.206980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.220147 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc8js\" (UniqueName: \"kubernetes.io/projected/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-kube-api-access-sc8js\") pod \"dnsmasq-dns-6b7bbf7cf9-glxm7\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.399454 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:43 crc kubenswrapper[4805]: I0217 00:46:43.886844 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-glxm7"] Feb 17 00:46:44 crc kubenswrapper[4805]: I0217 00:46:44.880693 4805 generic.go:334] "Generic (PLEG): container finished" podID="c232df1e-ad0d-4b23-9e2c-0c3494aee55b" containerID="ef0c5c0e727b33d9ee3186de834fe45f11461c4f204f06f8d722a471344f9b18" exitCode=0 Feb 17 00:46:44 crc kubenswrapper[4805]: I0217 00:46:44.881423 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" event={"ID":"c232df1e-ad0d-4b23-9e2c-0c3494aee55b","Type":"ContainerDied","Data":"ef0c5c0e727b33d9ee3186de834fe45f11461c4f204f06f8d722a471344f9b18"} Feb 17 00:46:44 crc kubenswrapper[4805]: I0217 00:46:44.881466 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" event={"ID":"c232df1e-ad0d-4b23-9e2c-0c3494aee55b","Type":"ContainerStarted","Data":"dc853fe8bbc6ee4ff002909b0628f19119dbb1c4ba5db133e5e25e5e9c5d4d89"} Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.352209 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.622441 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.623144 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="proxy-httpd" containerID="cri-o://ec0c2871e6afe66d3ea6a3a07ef450509c823f0e90339b32720995708b39b0e5" gracePeriod=30 Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.623153 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="ceilometer-notification-agent" containerID="cri-o://f0dd96784ef0a1eaf651dec69ad241cff4adf415f8146fae6953bf2c6658eea7" gracePeriod=30 Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.623153 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="sg-core" containerID="cri-o://3f393c15ea7df46e5dd1dd67ae46d4d4aa5cc4764d4dc85f73d42cd9762691d4" gracePeriod=30 Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.623087 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="ceilometer-central-agent" containerID="cri-o://2f87850bcce697a354ef3d598968f2d62cd4b4bdb1231f1b9766613e9df3ec35" gracePeriod=30 Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.635304 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.238:3000/\": EOF" Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.899263 4805 generic.go:334] "Generic (PLEG): container finished" podID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerID="ec0c2871e6afe66d3ea6a3a07ef450509c823f0e90339b32720995708b39b0e5" exitCode=0 Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.899295 4805 generic.go:334] "Generic (PLEG): container finished" podID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerID="3f393c15ea7df46e5dd1dd67ae46d4d4aa5cc4764d4dc85f73d42cd9762691d4" exitCode=2 Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.899344 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43acf0a2-63f5-48f0-ae06-b832705ef2a6","Type":"ContainerDied","Data":"ec0c2871e6afe66d3ea6a3a07ef450509c823f0e90339b32720995708b39b0e5"} Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.899383 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43acf0a2-63f5-48f0-ae06-b832705ef2a6","Type":"ContainerDied","Data":"3f393c15ea7df46e5dd1dd67ae46d4d4aa5cc4764d4dc85f73d42cd9762691d4"} Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.901098 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" event={"ID":"c232df1e-ad0d-4b23-9e2c-0c3494aee55b","Type":"ContainerStarted","Data":"2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d"} Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.901294 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.901367 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerName="nova-api-api" containerID="cri-o://64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd" gracePeriod=30 Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.901291 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerName="nova-api-log" containerID="cri-o://07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e" gracePeriod=30 Feb 17 00:46:45 crc kubenswrapper[4805]: I0217 00:46:45.921494 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" podStartSLOduration=2.921479539 podStartE2EDuration="2.921479539s" podCreationTimestamp="2026-02-17 00:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:45.920183363 +0000 UTC m=+1431.935992761" watchObservedRunningTime="2026-02-17 00:46:45.921479539 +0000 UTC m=+1431.937288937" Feb 17 00:46:46 crc kubenswrapper[4805]: I0217 00:46:46.246942 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:46 crc kubenswrapper[4805]: I0217 
00:46:46.913426 4805 generic.go:334] "Generic (PLEG): container finished" podID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerID="2f87850bcce697a354ef3d598968f2d62cd4b4bdb1231f1b9766613e9df3ec35" exitCode=0 Feb 17 00:46:46 crc kubenswrapper[4805]: I0217 00:46:46.913767 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43acf0a2-63f5-48f0-ae06-b832705ef2a6","Type":"ContainerDied","Data":"2f87850bcce697a354ef3d598968f2d62cd4b4bdb1231f1b9766613e9df3ec35"} Feb 17 00:46:46 crc kubenswrapper[4805]: I0217 00:46:46.915354 4805 generic.go:334] "Generic (PLEG): container finished" podID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerID="07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e" exitCode=143 Feb 17 00:46:46 crc kubenswrapper[4805]: I0217 00:46:46.916365 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ab8160c-d9dd-4557-8d49-c432ccec586a","Type":"ContainerDied","Data":"07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e"} Feb 17 00:46:48 crc kubenswrapper[4805]: I0217 00:46:48.939013 4805 generic.go:334] "Generic (PLEG): container finished" podID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerID="f0dd96784ef0a1eaf651dec69ad241cff4adf415f8146fae6953bf2c6658eea7" exitCode=0 Feb 17 00:46:48 crc kubenswrapper[4805]: I0217 00:46:48.939083 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43acf0a2-63f5-48f0-ae06-b832705ef2a6","Type":"ContainerDied","Data":"f0dd96784ef0a1eaf651dec69ad241cff4adf415f8146fae6953bf2c6658eea7"} Feb 17 00:46:48 crc kubenswrapper[4805]: I0217 00:46:48.939314 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"43acf0a2-63f5-48f0-ae06-b832705ef2a6","Type":"ContainerDied","Data":"362a1c880606bb44090bcb1cc5893b424be8c1a43d2b18c2a02a3c21b25e7388"} Feb 17 00:46:48 crc kubenswrapper[4805]: I0217 00:46:48.939350 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="362a1c880606bb44090bcb1cc5893b424be8c1a43d2b18c2a02a3c21b25e7388" Feb 17 00:46:48 crc kubenswrapper[4805]: I0217 00:46:48.957721 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.071976 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-scripts\") pod \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.072044 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-log-httpd\") pod \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.072150 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-config-data\") pod \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.072218 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-combined-ca-bundle\") pod \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.072318 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhmsk\" (UniqueName: \"kubernetes.io/projected/43acf0a2-63f5-48f0-ae06-b832705ef2a6-kube-api-access-nhmsk\") pod \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.072426 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-ceilometer-tls-certs\") pod \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.072451 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-sg-core-conf-yaml\") pod \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.072479 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-run-httpd\") pod \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\" (UID: \"43acf0a2-63f5-48f0-ae06-b832705ef2a6\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.073363 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "43acf0a2-63f5-48f0-ae06-b832705ef2a6" (UID: "43acf0a2-63f5-48f0-ae06-b832705ef2a6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.073652 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "43acf0a2-63f5-48f0-ae06-b832705ef2a6" (UID: "43acf0a2-63f5-48f0-ae06-b832705ef2a6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.106105 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-scripts" (OuterVolumeSpecName: "scripts") pod "43acf0a2-63f5-48f0-ae06-b832705ef2a6" (UID: "43acf0a2-63f5-48f0-ae06-b832705ef2a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.115458 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43acf0a2-63f5-48f0-ae06-b832705ef2a6-kube-api-access-nhmsk" (OuterVolumeSpecName: "kube-api-access-nhmsk") pod "43acf0a2-63f5-48f0-ae06-b832705ef2a6" (UID: "43acf0a2-63f5-48f0-ae06-b832705ef2a6"). InnerVolumeSpecName "kube-api-access-nhmsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.119389 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "43acf0a2-63f5-48f0-ae06-b832705ef2a6" (UID: "43acf0a2-63f5-48f0-ae06-b832705ef2a6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.174812 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhmsk\" (UniqueName: \"kubernetes.io/projected/43acf0a2-63f5-48f0-ae06-b832705ef2a6-kube-api-access-nhmsk\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.174851 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.174864 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.174876 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.174887 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/43acf0a2-63f5-48f0-ae06-b832705ef2a6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.179091 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "43acf0a2-63f5-48f0-ae06-b832705ef2a6" (UID: "43acf0a2-63f5-48f0-ae06-b832705ef2a6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.207015 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43acf0a2-63f5-48f0-ae06-b832705ef2a6" (UID: "43acf0a2-63f5-48f0-ae06-b832705ef2a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.216422 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-config-data" (OuterVolumeSpecName: "config-data") pod "43acf0a2-63f5-48f0-ae06-b832705ef2a6" (UID: "43acf0a2-63f5-48f0-ae06-b832705ef2a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.277315 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.277371 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.277386 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/43acf0a2-63f5-48f0-ae06-b832705ef2a6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.445233 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.582771 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-combined-ca-bundle\") pod \"7ab8160c-d9dd-4557-8d49-c432ccec586a\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.583053 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab8160c-d9dd-4557-8d49-c432ccec586a-logs\") pod \"7ab8160c-d9dd-4557-8d49-c432ccec586a\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.583100 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmxcw\" (UniqueName: \"kubernetes.io/projected/7ab8160c-d9dd-4557-8d49-c432ccec586a-kube-api-access-kmxcw\") pod \"7ab8160c-d9dd-4557-8d49-c432ccec586a\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.583118 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-config-data\") pod \"7ab8160c-d9dd-4557-8d49-c432ccec586a\" (UID: \"7ab8160c-d9dd-4557-8d49-c432ccec586a\") " Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.583838 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ab8160c-d9dd-4557-8d49-c432ccec586a-logs" (OuterVolumeSpecName: "logs") pod "7ab8160c-d9dd-4557-8d49-c432ccec586a" (UID: "7ab8160c-d9dd-4557-8d49-c432ccec586a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.590025 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab8160c-d9dd-4557-8d49-c432ccec586a-kube-api-access-kmxcw" (OuterVolumeSpecName: "kube-api-access-kmxcw") pod "7ab8160c-d9dd-4557-8d49-c432ccec586a" (UID: "7ab8160c-d9dd-4557-8d49-c432ccec586a"). InnerVolumeSpecName "kube-api-access-kmxcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.619713 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ab8160c-d9dd-4557-8d49-c432ccec586a" (UID: "7ab8160c-d9dd-4557-8d49-c432ccec586a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.626479 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-config-data" (OuterVolumeSpecName: "config-data") pod "7ab8160c-d9dd-4557-8d49-c432ccec586a" (UID: "7ab8160c-d9dd-4557-8d49-c432ccec586a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.688809 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.688831 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab8160c-d9dd-4557-8d49-c432ccec586a-logs\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.688847 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmxcw\" (UniqueName: \"kubernetes.io/projected/7ab8160c-d9dd-4557-8d49-c432ccec586a-kube-api-access-kmxcw\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.688857 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab8160c-d9dd-4557-8d49-c432ccec586a-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.950861 4805 generic.go:334] "Generic (PLEG): container finished" podID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerID="64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd" exitCode=0 Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.950918 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.950951 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.952015 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ab8160c-d9dd-4557-8d49-c432ccec586a","Type":"ContainerDied","Data":"64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd"} Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.952293 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ab8160c-d9dd-4557-8d49-c432ccec586a","Type":"ContainerDied","Data":"2d082b5b6e61346217c1a76a956909946a035bb7023c0aa9c6c327a9fc6f06ba"} Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.952372 4805 scope.go:117] "RemoveContainer" containerID="64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd" Feb 17 00:46:49 crc kubenswrapper[4805]: I0217 00:46:49.984284 4805 scope.go:117] "RemoveContainer" containerID="07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.005254 4805 scope.go:117] "RemoveContainer" containerID="64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd" Feb 17 00:46:50 crc kubenswrapper[4805]: E0217 00:46:50.005816 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd\": container with ID starting with 64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd not found: ID does not exist" containerID="64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.005855 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd"} err="failed to 
get container status \"64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd\": rpc error: code = NotFound desc = could not find container \"64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd\": container with ID starting with 64d54afc81a2dbfa07e49ca4be649e70b8e3abb957be74030a741198e63f4ddd not found: ID does not exist" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.005882 4805 scope.go:117] "RemoveContainer" containerID="07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e" Feb 17 00:46:50 crc kubenswrapper[4805]: E0217 00:46:50.006167 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e\": container with ID starting with 07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e not found: ID does not exist" containerID="07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.006181 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e"} err="failed to get container status \"07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e\": rpc error: code = NotFound desc = could not find container \"07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e\": container with ID starting with 07678e2b14dd87df67553d8ab492e8892bc05a1af0ddb495599c5786aa3a521e not found: ID does not exist" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.055390 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.105338 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.131105 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.140660 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.151481 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:50 crc kubenswrapper[4805]: E0217 00:46:50.151934 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerName="nova-api-log" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.151951 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerName="nova-api-log" Feb 17 00:46:50 crc kubenswrapper[4805]: E0217 00:46:50.151974 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="ceilometer-central-agent" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.151981 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="ceilometer-central-agent" Feb 17 00:46:50 crc kubenswrapper[4805]: E0217 00:46:50.151989 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="sg-core" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.151995 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="sg-core" Feb 17 00:46:50 crc kubenswrapper[4805]: E0217 
00:46:50.152018 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="ceilometer-notification-agent" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.152024 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="ceilometer-notification-agent" Feb 17 00:46:50 crc kubenswrapper[4805]: E0217 00:46:50.152042 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerName="nova-api-api" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.152048 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerName="nova-api-api" Feb 17 00:46:50 crc kubenswrapper[4805]: E0217 00:46:50.152061 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="proxy-httpd" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.152067 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="proxy-httpd" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.152243 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="ceilometer-central-agent" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.152258 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="ceilometer-notification-agent" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.152270 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerName="nova-api-api" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.152281 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="proxy-httpd" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.152293 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab8160c-d9dd-4557-8d49-c432ccec586a" containerName="nova-api-log" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.152303 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" containerName="sg-core" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.154802 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.156956 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.157166 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.157671 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.162210 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.172508 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.174382 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.177764 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.178658 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.178736 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.195663 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.300534 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2efa96de-6a3c-457c-b55f-45e97212613e-logs\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.300588 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.300608 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.300733 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.300784 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrbmb\" (UniqueName: \"kubernetes.io/projected/b4d499ab-baf7-4e88-8631-38170125d756-kube-api-access-zrbmb\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.300956 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.301118 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdh5j\" (UniqueName: \"kubernetes.io/projected/2efa96de-6a3c-457c-b55f-45e97212613e-kube-api-access-sdh5j\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.301201 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.301266 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-log-httpd\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.301313 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.301441 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-scripts\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.301620 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-run-httpd\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.301710 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-config-data\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.301759 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-config-data\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410446 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-scripts\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410562 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-run-httpd\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410612 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-config-data\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410639 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-config-data\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410719 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2efa96de-6a3c-457c-b55f-45e97212613e-logs\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410783 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410805 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410825 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrbmb\" (UniqueName: \"kubernetes.io/projected/b4d499ab-baf7-4e88-8631-38170125d756-kube-api-access-zrbmb\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410862 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410892 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdh5j\" (UniqueName: \"kubernetes.io/projected/2efa96de-6a3c-457c-b55f-45e97212613e-kube-api-access-sdh5j\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410916 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410936 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-log-httpd\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.410953 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.411943 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2efa96de-6a3c-457c-b55f-45e97212613e-logs\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.412864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-run-httpd\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.413381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-log-httpd\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.416862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-scripts\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.418355 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.423889 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.426495 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-config-data\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.431022 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.431356 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdh5j\" (UniqueName: \"kubernetes.io/projected/2efa96de-6a3c-457c-b55f-45e97212613e-kube-api-access-sdh5j\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.433550 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.434559 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.442854 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-config-data\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.444798 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.447499 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrbmb\" (UniqueName: \"kubernetes.io/projected/b4d499ab-baf7-4e88-8631-38170125d756-kube-api-access-zrbmb\") pod \"ceilometer-0\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.476979 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.494514 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.800303 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43acf0a2-63f5-48f0-ae06-b832705ef2a6" path="/var/lib/kubelet/pods/43acf0a2-63f5-48f0-ae06-b832705ef2a6/volumes" Feb 17 00:46:50 crc kubenswrapper[4805]: I0217 00:46:50.804388 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab8160c-d9dd-4557-8d49-c432ccec586a" path="/var/lib/kubelet/pods/7ab8160c-d9dd-4557-8d49-c432ccec586a/volumes" Feb 17 00:46:51 crc kubenswrapper[4805]: I0217 00:46:51.063571 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:46:51 crc kubenswrapper[4805]: I0217 00:46:51.148499 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:46:51 crc kubenswrapper[4805]: W0217 00:46:51.149841 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4d499ab_baf7_4e88_8631_38170125d756.slice/crio-9a9f951a4793396b5b029ca8edfc89af0216266cf7e646f8fdcacd506c129c4f WatchSource:0}: Error finding container 9a9f951a4793396b5b029ca8edfc89af0216266cf7e646f8fdcacd506c129c4f: Status 404 returned error can't find the container with id 9a9f951a4793396b5b029ca8edfc89af0216266cf7e646f8fdcacd506c129c4f Feb 17 00:46:51 crc kubenswrapper[4805]: I0217 00:46:51.246956 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:51 crc kubenswrapper[4805]: I0217 00:46:51.268847 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.005499 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4d499ab-baf7-4e88-8631-38170125d756","Type":"ContainerStarted","Data":"e926f9924473eff08fe262e6df894ff328407d82072b25773d16d9854397d722"} Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.005861 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4d499ab-baf7-4e88-8631-38170125d756","Type":"ContainerStarted","Data":"9a9f951a4793396b5b029ca8edfc89af0216266cf7e646f8fdcacd506c129c4f"} Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.008402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2efa96de-6a3c-457c-b55f-45e97212613e","Type":"ContainerStarted","Data":"5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387"} Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.008468 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2efa96de-6a3c-457c-b55f-45e97212613e","Type":"ContainerStarted","Data":"a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad"} Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.008489 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2efa96de-6a3c-457c-b55f-45e97212613e","Type":"ContainerStarted","Data":"2942c53d8b07f0695915e68d1ea160049d733be87f4e2de80a683ed7a319a2a1"} Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.038810 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.043800 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.043779639 podStartE2EDuration="2.043779639s" podCreationTimestamp="2026-02-17 00:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:52.030740666 +0000 UTC m=+1438.046550064" watchObservedRunningTime="2026-02-17 00:46:52.043779639 +0000 UTC m=+1438.059589037" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.228342 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8lhxq"] Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.230162 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.233349 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.234682 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.242915 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8lhxq"] Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.355613 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-scripts\") pod \"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.355676 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.356208 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q4bc\" (UniqueName: \"kubernetes.io/projected/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-kube-api-access-2q4bc\") pod \"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.356266 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-config-data\") pod \"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.461381 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-scripts\") pod \"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.461460 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-combined-ca-bundle\") pod 
\"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.461527 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q4bc\" (UniqueName: \"kubernetes.io/projected/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-kube-api-access-2q4bc\") pod \"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.461581 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-config-data\") pod \"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.466198 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.466712 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-config-data\") pod \"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.466994 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-scripts\") pod \"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.481931 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q4bc\" (UniqueName: \"kubernetes.io/projected/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-kube-api-access-2q4bc\") pod \"nova-cell1-cell-mapping-8lhxq\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:52 crc kubenswrapper[4805]: I0217 00:46:52.655591 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:53 crc kubenswrapper[4805]: I0217 00:46:53.021377 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4d499ab-baf7-4e88-8631-38170125d756","Type":"ContainerStarted","Data":"94f88c087d451b909e3b5f712ea7d45c1990589e85bab58f20ae21d31efff3c0"} Feb 17 00:46:53 crc kubenswrapper[4805]: I0217 00:46:53.077588 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:46:53 crc kubenswrapper[4805]: I0217 00:46:53.077648 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:46:53 crc kubenswrapper[4805]: W0217 00:46:53.130729 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod283dbd3a_e5ca_4e5f_beb2_59c9498f0fb4.slice/crio-4e3c62e4e769d69fcbf1b2eb011676cf083e5ee7a5f1270278f746e4ee675933 WatchSource:0}: Error finding container 4e3c62e4e769d69fcbf1b2eb011676cf083e5ee7a5f1270278f746e4ee675933: Status 404 returned error can't find the container with id 4e3c62e4e769d69fcbf1b2eb011676cf083e5ee7a5f1270278f746e4ee675933 Feb 17 00:46:53 crc kubenswrapper[4805]: I0217 00:46:53.140145 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8lhxq"] Feb 17 00:46:53 crc kubenswrapper[4805]: I0217 00:46:53.400527 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:46:53 crc kubenswrapper[4805]: I0217 00:46:53.467421 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-gs54b"] Feb 17 00:46:53 crc kubenswrapper[4805]: I0217 00:46:53.467685 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" podUID="e7da63b3-96f0-46ef-8ff4-e5ec29821564" containerName="dnsmasq-dns" containerID="cri-o://18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a" gracePeriod=10 Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.018938 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.033078 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4d499ab-baf7-4e88-8631-38170125d756","Type":"ContainerStarted","Data":"ec49b0f8d358830df6e4c2847b0efbe4ca099ea1ca72b312be86054dc6d91659"} Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.038608 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8lhxq" event={"ID":"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4","Type":"ContainerStarted","Data":"fde80f8efc7c4b6e4801b99af9d81b1bf763d9ffb205267c5e8bb2b173764ae9"} Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.038656 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8lhxq" event={"ID":"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4","Type":"ContainerStarted","Data":"4e3c62e4e769d69fcbf1b2eb011676cf083e5ee7a5f1270278f746e4ee675933"} Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.041248 4805 generic.go:334] "Generic (PLEG): container finished" podID="e7da63b3-96f0-46ef-8ff4-e5ec29821564" containerID="18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a" exitCode=0 Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.041276 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" event={"ID":"e7da63b3-96f0-46ef-8ff4-e5ec29821564","Type":"ContainerDied","Data":"18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a"} Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.041297 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.041310 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-gs54b" event={"ID":"e7da63b3-96f0-46ef-8ff4-e5ec29821564","Type":"ContainerDied","Data":"daff57d5e4bc336963aef196ebe0eef20fc01d06caa539e457da219aaa247add"} Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.041357 4805 scope.go:117] "RemoveContainer" containerID="18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.092828 4805 scope.go:117] "RemoveContainer" containerID="d85754b4ab82fd09371d12ea5e02a1f5b1f60a08c7c44e55ae92e21841d8b4e9" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.115033 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-svc\") pod \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.115122 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-swift-storage-0\") pod \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.117950 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-config\") pod \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.118004 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7mqg\" (UniqueName: \"kubernetes.io/projected/e7da63b3-96f0-46ef-8ff4-e5ec29821564-kube-api-access-h7mqg\") pod \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.118065 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-nb\") pod \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.118207 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-sb\") pod \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\" (UID: \"e7da63b3-96f0-46ef-8ff4-e5ec29821564\") " Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.132659 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7da63b3-96f0-46ef-8ff4-e5ec29821564-kube-api-access-h7mqg" (OuterVolumeSpecName: "kube-api-access-h7mqg") pod "e7da63b3-96f0-46ef-8ff4-e5ec29821564" (UID: "e7da63b3-96f0-46ef-8ff4-e5ec29821564"). InnerVolumeSpecName "kube-api-access-h7mqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.135047 4805 scope.go:117] "RemoveContainer" containerID="18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a" Feb 17 00:46:54 crc kubenswrapper[4805]: E0217 00:46:54.138730 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a\": container with ID starting with 18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a not found: ID does not exist" containerID="18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.138764 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a"} err="failed to get container status \"18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a\": rpc error: code = NotFound desc = could not find container \"18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a\": container with ID starting with 18173c67dc7a654e315fe1f15e6a0e5d7343767e39c15833a877be5083f7f42a not found: ID does not exist" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.138782 4805 scope.go:117] "RemoveContainer" containerID="d85754b4ab82fd09371d12ea5e02a1f5b1f60a08c7c44e55ae92e21841d8b4e9" Feb 17 00:46:54 crc kubenswrapper[4805]: E0217 00:46:54.140495 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d85754b4ab82fd09371d12ea5e02a1f5b1f60a08c7c44e55ae92e21841d8b4e9\": container with ID starting with d85754b4ab82fd09371d12ea5e02a1f5b1f60a08c7c44e55ae92e21841d8b4e9 not found: ID does not exist" containerID="d85754b4ab82fd09371d12ea5e02a1f5b1f60a08c7c44e55ae92e21841d8b4e9" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.140516 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d85754b4ab82fd09371d12ea5e02a1f5b1f60a08c7c44e55ae92e21841d8b4e9"} err="failed to get container status \"d85754b4ab82fd09371d12ea5e02a1f5b1f60a08c7c44e55ae92e21841d8b4e9\": rpc error: code = NotFound desc = could not find container \"d85754b4ab82fd09371d12ea5e02a1f5b1f60a08c7c44e55ae92e21841d8b4e9\": container with ID starting with d85754b4ab82fd09371d12ea5e02a1f5b1f60a08c7c44e55ae92e21841d8b4e9 not found: ID does not exist" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.179890 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e7da63b3-96f0-46ef-8ff4-e5ec29821564" (UID: "e7da63b3-96f0-46ef-8ff4-e5ec29821564"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.193967 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7da63b3-96f0-46ef-8ff4-e5ec29821564" (UID: "e7da63b3-96f0-46ef-8ff4-e5ec29821564"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.209129 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e7da63b3-96f0-46ef-8ff4-e5ec29821564" (UID: "e7da63b3-96f0-46ef-8ff4-e5ec29821564"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.211705 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e7da63b3-96f0-46ef-8ff4-e5ec29821564" (UID: "e7da63b3-96f0-46ef-8ff4-e5ec29821564"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.220411 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.220432 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.220444 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7mqg\" (UniqueName: \"kubernetes.io/projected/e7da63b3-96f0-46ef-8ff4-e5ec29821564-kube-api-access-h7mqg\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.220453 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.220462 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.236793 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-config" (OuterVolumeSpecName: "config") pod "e7da63b3-96f0-46ef-8ff4-e5ec29821564" (UID: "e7da63b3-96f0-46ef-8ff4-e5ec29821564"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.322166 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7da63b3-96f0-46ef-8ff4-e5ec29821564-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.375636 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8lhxq" podStartSLOduration=2.375617599 podStartE2EDuration="2.375617599s" podCreationTimestamp="2026-02-17 00:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:46:54.060667235 +0000 UTC m=+1440.076476633" watchObservedRunningTime="2026-02-17 00:46:54.375617599 +0000 UTC m=+1440.391426997" Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.384038 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-gs54b"] Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.397732 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-gs54b"] Feb 17 00:46:54 crc kubenswrapper[4805]: I0217 00:46:54.852342 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7da63b3-96f0-46ef-8ff4-e5ec29821564" path="/var/lib/kubelet/pods/e7da63b3-96f0-46ef-8ff4-e5ec29821564/volumes" Feb 17 00:46:55 crc kubenswrapper[4805]: I0217 00:46:55.072252 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4d499ab-baf7-4e88-8631-38170125d756","Type":"ContainerStarted","Data":"a3bf4eaf6845bb8bc7a63f36847355f1129d1065934ae27afdd6fad8ce4d6068"} Feb 17 00:46:55 crc 
kubenswrapper[4805]: I0217 00:46:55.072901 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 00:46:58 crc kubenswrapper[4805]: I0217 00:46:58.119627 4805 generic.go:334] "Generic (PLEG): container finished" podID="283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4" containerID="fde80f8efc7c4b6e4801b99af9d81b1bf763d9ffb205267c5e8bb2b173764ae9" exitCode=0 Feb 17 00:46:58 crc kubenswrapper[4805]: I0217 00:46:58.119684 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8lhxq" event={"ID":"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4","Type":"ContainerDied","Data":"fde80f8efc7c4b6e4801b99af9d81b1bf763d9ffb205267c5e8bb2b173764ae9"} Feb 17 00:46:58 crc kubenswrapper[4805]: I0217 00:46:58.153062 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.10549368 podStartE2EDuration="8.153037567s" podCreationTimestamp="2026-02-17 00:46:50 +0000 UTC" firstStartedPulling="2026-02-17 00:46:51.152879787 +0000 UTC m=+1437.168689185" lastFinishedPulling="2026-02-17 00:46:54.200423674 +0000 UTC m=+1440.216233072" observedRunningTime="2026-02-17 00:46:55.114444969 +0000 UTC m=+1441.130254367" watchObservedRunningTime="2026-02-17 00:46:58.153037567 +0000 UTC m=+1444.168847015" Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.676685 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.850963 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-combined-ca-bundle\") pod \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.851025 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-config-data\") pod \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.851050 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-scripts\") pod \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.851167 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q4bc\" (UniqueName: \"kubernetes.io/projected/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-kube-api-access-2q4bc\") pod \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\" (UID: \"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4\") " Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.856882 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-scripts" (OuterVolumeSpecName: "scripts") pod "283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4" (UID: "283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.861640 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-kube-api-access-2q4bc" (OuterVolumeSpecName: "kube-api-access-2q4bc") pod "283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4" (UID: "283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4"). InnerVolumeSpecName "kube-api-access-2q4bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.893447 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4" (UID: "283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.895250 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-config-data" (OuterVolumeSpecName: "config-data") pod "283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4" (UID: "283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.953871 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.953908 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.953917 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:46:59 crc kubenswrapper[4805]: I0217 00:46:59.953926 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q4bc\" (UniqueName: \"kubernetes.io/projected/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4-kube-api-access-2q4bc\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.138594 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8lhxq" event={"ID":"283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4","Type":"ContainerDied","Data":"4e3c62e4e769d69fcbf1b2eb011676cf083e5ee7a5f1270278f746e4ee675933"} Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.139099 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e3c62e4e769d69fcbf1b2eb011676cf083e5ee7a5f1270278f746e4ee675933" Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.138714 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8lhxq" Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.304518 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.304776 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2936e576-b736-4e51-af25-bf06d2959067" containerName="nova-scheduler-scheduler" containerID="cri-o://a6dbd8064ac6fddbdb937b04650ebd5dafbcb552c7d9dc7241156aaf34fae465" gracePeriod=30 Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.327882 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.328125 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2efa96de-6a3c-457c-b55f-45e97212613e" containerName="nova-api-log" containerID="cri-o://a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad" gracePeriod=30 Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.328243 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2efa96de-6a3c-457c-b55f-45e97212613e" containerName="nova-api-api" containerID="cri-o://5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387" gracePeriod=30 Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.341258 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.341718 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-log" containerID="cri-o://d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237" gracePeriod=30 Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.341788 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-metadata" containerID="cri-o://c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898" gracePeriod=30 Feb 17 00:47:00 crc kubenswrapper[4805]: I0217 00:47:00.952357 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.005807 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdh5j\" (UniqueName: \"kubernetes.io/projected/2efa96de-6a3c-457c-b55f-45e97212613e-kube-api-access-sdh5j\") pod \"2efa96de-6a3c-457c-b55f-45e97212613e\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.005991 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-combined-ca-bundle\") pod \"2efa96de-6a3c-457c-b55f-45e97212613e\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.006871 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-config-data\") pod \"2efa96de-6a3c-457c-b55f-45e97212613e\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.006944 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2efa96de-6a3c-457c-b55f-45e97212613e-logs\") pod \"2efa96de-6a3c-457c-b55f-45e97212613e\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.006977 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-public-tls-certs\") pod \"2efa96de-6a3c-457c-b55f-45e97212613e\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.006998 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-internal-tls-certs\") pod \"2efa96de-6a3c-457c-b55f-45e97212613e\" (UID: \"2efa96de-6a3c-457c-b55f-45e97212613e\") " Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.007225 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2efa96de-6a3c-457c-b55f-45e97212613e-logs" (OuterVolumeSpecName: "logs") pod "2efa96de-6a3c-457c-b55f-45e97212613e" (UID: "2efa96de-6a3c-457c-b55f-45e97212613e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.007718 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2efa96de-6a3c-457c-b55f-45e97212613e-logs\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.025704 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2efa96de-6a3c-457c-b55f-45e97212613e-kube-api-access-sdh5j" (OuterVolumeSpecName: "kube-api-access-sdh5j") pod "2efa96de-6a3c-457c-b55f-45e97212613e" (UID: "2efa96de-6a3c-457c-b55f-45e97212613e"). InnerVolumeSpecName "kube-api-access-sdh5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.048408 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-config-data" (OuterVolumeSpecName: "config-data") pod "2efa96de-6a3c-457c-b55f-45e97212613e" (UID: "2efa96de-6a3c-457c-b55f-45e97212613e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.057841 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2efa96de-6a3c-457c-b55f-45e97212613e" (UID: "2efa96de-6a3c-457c-b55f-45e97212613e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.080622 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2efa96de-6a3c-457c-b55f-45e97212613e" (UID: "2efa96de-6a3c-457c-b55f-45e97212613e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.095921 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2efa96de-6a3c-457c-b55f-45e97212613e" (UID: "2efa96de-6a3c-457c-b55f-45e97212613e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.109966 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.109998 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.110009 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdh5j\" (UniqueName: \"kubernetes.io/projected/2efa96de-6a3c-457c-b55f-45e97212613e-kube-api-access-sdh5j\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.110020 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.110031 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2efa96de-6a3c-457c-b55f-45e97212613e-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.157953 4805 generic.go:334] "Generic (PLEG): container finished" podID="ef39d973-397f-4d39-9e6a-7debbc762911" containerID="d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237" exitCode=143 Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.158016 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"ef39d973-397f-4d39-9e6a-7debbc762911","Type":"ContainerDied","Data":"d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237"} Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.159852 4805 generic.go:334] "Generic (PLEG): container finished" podID="2efa96de-6a3c-457c-b55f-45e97212613e" containerID="5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387" exitCode=0 Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.159869 4805 generic.go:334] "Generic (PLEG): container finished" podID="2efa96de-6a3c-457c-b55f-45e97212613e" containerID="a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad" exitCode=143 Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.159884 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2efa96de-6a3c-457c-b55f-45e97212613e","Type":"ContainerDied","Data":"5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387"} Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.159900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2efa96de-6a3c-457c-b55f-45e97212613e","Type":"ContainerDied","Data":"a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad"} Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.159909 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2efa96de-6a3c-457c-b55f-45e97212613e","Type":"ContainerDied","Data":"2942c53d8b07f0695915e68d1ea160049d733be87f4e2de80a683ed7a319a2a1"} Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.159924 4805 scope.go:117] "RemoveContainer" containerID="5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.159942 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.186140 4805 scope.go:117] "RemoveContainer" containerID="a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.215430 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.216847 4805 scope.go:117] "RemoveContainer" containerID="5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387" Feb 17 00:47:01 crc kubenswrapper[4805]: E0217 00:47:01.217838 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387\": container with ID starting with 5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387 not found: ID does not exist" containerID="5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.217889 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387"} err="failed to get container status \"5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387\": rpc error: code = NotFound desc = could not find container \"5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387\": container with ID starting with 5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387 not found: ID does not exist" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.217919 4805 scope.go:117] "RemoveContainer" containerID="a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad" Feb 17 00:47:01 crc kubenswrapper[4805]: E0217 00:47:01.223656 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad\": container with ID starting with a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad not found: ID does not exist" containerID="a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.223708 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad"} err="failed to get container status \"a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad\": rpc error: code = NotFound desc = could not find container \"a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad\": container with ID starting with a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad not found: ID does not exist" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.223740 4805 scope.go:117] "RemoveContainer" containerID="5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.224180 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387"} err="failed to get container status \"5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387\": rpc error: code = NotFound desc = could not find container \"5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387\": container with ID starting with 
5915eb29e78ddf3e4a87531aaab9d40ee379202828e600ce66adde02476cf387 not found: ID does not exist" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.224208 4805 scope.go:117] "RemoveContainer" containerID="a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.224455 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad"} err="failed to get container status \"a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad\": rpc error: code = NotFound desc = could not find container \"a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad\": container with ID starting with a9723043ee55a5cbb679951382cd203a9adc9753975b350c05930bbaff7b41ad not found: ID does not exist" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.234043 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.245986 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 00:47:01 crc kubenswrapper[4805]: E0217 00:47:01.246543 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7da63b3-96f0-46ef-8ff4-e5ec29821564" containerName="dnsmasq-dns" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.246564 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7da63b3-96f0-46ef-8ff4-e5ec29821564" containerName="dnsmasq-dns" Feb 17 00:47:01 crc kubenswrapper[4805]: E0217 00:47:01.246595 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2efa96de-6a3c-457c-b55f-45e97212613e" containerName="nova-api-api" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.246604 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2efa96de-6a3c-457c-b55f-45e97212613e" containerName="nova-api-api" Feb 17 00:47:01 crc kubenswrapper[4805]: E0217 00:47:01.246625 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2efa96de-6a3c-457c-b55f-45e97212613e" containerName="nova-api-log" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.246651 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2efa96de-6a3c-457c-b55f-45e97212613e" containerName="nova-api-log" Feb 17 00:47:01 crc kubenswrapper[4805]: E0217 00:47:01.246677 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4" containerName="nova-manage" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.246685 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4" containerName="nova-manage" Feb 17 00:47:01 crc kubenswrapper[4805]: E0217 00:47:01.246716 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7da63b3-96f0-46ef-8ff4-e5ec29821564" containerName="init" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.246724 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7da63b3-96f0-46ef-8ff4-e5ec29821564" containerName="init" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.246946 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2efa96de-6a3c-457c-b55f-45e97212613e" containerName="nova-api-api" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.246989 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4" containerName="nova-manage" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.247001 
4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2efa96de-6a3c-457c-b55f-45e97212613e" containerName="nova-api-log" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.247021 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7da63b3-96f0-46ef-8ff4-e5ec29821564" containerName="dnsmasq-dns" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.248371 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.255768 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.256402 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.256460 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.256761 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.311915 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-public-tls-certs\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.311986 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppvhb\" (UniqueName: \"kubernetes.io/projected/a6481c50-bc40-4ee2-a161-127c2d2d23df-kube-api-access-ppvhb\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.312040 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6481c50-bc40-4ee2-a161-127c2d2d23df-logs\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.312105 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.312162 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.312214 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-config-data\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.412850 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.413127 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-config-data\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.413265 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-public-tls-certs\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.413356 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppvhb\" (UniqueName: \"kubernetes.io/projected/a6481c50-bc40-4ee2-a161-127c2d2d23df-kube-api-access-ppvhb\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.413438 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6481c50-bc40-4ee2-a161-127c2d2d23df-logs\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.413518 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.414858 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6481c50-bc40-4ee2-a161-127c2d2d23df-logs\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.418128 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-config-data\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.418859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-public-tls-certs\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.419050 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.419239 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6481c50-bc40-4ee2-a161-127c2d2d23df-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.436861 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppvhb\" (UniqueName: \"kubernetes.io/projected/a6481c50-bc40-4ee2-a161-127c2d2d23df-kube-api-access-ppvhb\") pod \"nova-api-0\" (UID: \"a6481c50-bc40-4ee2-a161-127c2d2d23df\") " pod="openstack/nova-api-0" Feb 17 00:47:01 crc kubenswrapper[4805]: I0217 00:47:01.574793 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.106041 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.182209 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6481c50-bc40-4ee2-a161-127c2d2d23df","Type":"ContainerStarted","Data":"31c88674db0b3f70a6af032bddbd61dca2a082da9cdba122091d9d49113219d5"} Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.185503 4805 generic.go:334] "Generic (PLEG): container finished" podID="2936e576-b736-4e51-af25-bf06d2959067" containerID="a6dbd8064ac6fddbdb937b04650ebd5dafbcb552c7d9dc7241156aaf34fae465" exitCode=0 Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.185548 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2936e576-b736-4e51-af25-bf06d2959067","Type":"ContainerDied","Data":"a6dbd8064ac6fddbdb937b04650ebd5dafbcb552c7d9dc7241156aaf34fae465"} Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.503900 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.641812 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-config-data\") pod \"2936e576-b736-4e51-af25-bf06d2959067\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.642027 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-combined-ca-bundle\") pod \"2936e576-b736-4e51-af25-bf06d2959067\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.642092 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2wzk\" (UniqueName: \"kubernetes.io/projected/2936e576-b736-4e51-af25-bf06d2959067-kube-api-access-f2wzk\") pod \"2936e576-b736-4e51-af25-bf06d2959067\" (UID: \"2936e576-b736-4e51-af25-bf06d2959067\") " Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.646699 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2936e576-b736-4e51-af25-bf06d2959067-kube-api-access-f2wzk" (OuterVolumeSpecName: "kube-api-access-f2wzk") pod "2936e576-b736-4e51-af25-bf06d2959067" (UID: "2936e576-b736-4e51-af25-bf06d2959067"). InnerVolumeSpecName "kube-api-access-f2wzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.678287 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-config-data" (OuterVolumeSpecName: "config-data") pod "2936e576-b736-4e51-af25-bf06d2959067" (UID: "2936e576-b736-4e51-af25-bf06d2959067"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.731504 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2936e576-b736-4e51-af25-bf06d2959067" (UID: "2936e576-b736-4e51-af25-bf06d2959067"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.745221 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.745247 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2936e576-b736-4e51-af25-bf06d2959067-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.745259 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2wzk\" (UniqueName: \"kubernetes.io/projected/2936e576-b736-4e51-af25-bf06d2959067-kube-api-access-f2wzk\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:02 crc kubenswrapper[4805]: I0217 00:47:02.799999 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2efa96de-6a3c-457c-b55f-45e97212613e" path="/var/lib/kubelet/pods/2efa96de-6a3c-457c-b55f-45e97212613e/volumes" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.197889 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6481c50-bc40-4ee2-a161-127c2d2d23df","Type":"ContainerStarted","Data":"07da22a86084d01eea1d16ce36d3763e65419c2c7475657083a3712e40b65346"} Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.198190 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a6481c50-bc40-4ee2-a161-127c2d2d23df","Type":"ContainerStarted","Data":"53570914bd7ee2e9d26d19cdef9b75f778bb5eb3aff32a70c2fe741f01fdf584"} Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.200988 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2936e576-b736-4e51-af25-bf06d2959067","Type":"ContainerDied","Data":"c82bb4497971f5ae06ad00c644850d06c22f3c41cc903275f2825b4d0313b0e2"} Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.201042 4805 scope.go:117] "RemoveContainer" containerID="a6dbd8064ac6fddbdb937b04650ebd5dafbcb552c7d9dc7241156aaf34fae465" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.201171 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.221569 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.221553563 podStartE2EDuration="2.221553563s" podCreationTimestamp="2026-02-17 00:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:47:03.218010204 +0000 UTC m=+1449.233819602" watchObservedRunningTime="2026-02-17 00:47:03.221553563 +0000 UTC m=+1449.237362961" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.244567 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.254474 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.263926 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:47:03 crc kubenswrapper[4805]: E0217 00:47:03.264698 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2936e576-b736-4e51-af25-bf06d2959067" containerName="nova-scheduler-scheduler" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.264794 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2936e576-b736-4e51-af25-bf06d2959067" containerName="nova-scheduler-scheduler" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.265178 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2936e576-b736-4e51-af25-bf06d2959067" containerName="nova-scheduler-scheduler" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.266532 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.269016 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.273134 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.357190 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4deee6-2619-4e76-9a81-9adbaa868ee2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ad4deee6-2619-4e76-9a81-9adbaa868ee2\") " pod="openstack/nova-scheduler-0" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.357247 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wncrn\" (UniqueName: \"kubernetes.io/projected/ad4deee6-2619-4e76-9a81-9adbaa868ee2-kube-api-access-wncrn\") pod \"nova-scheduler-0\" (UID: \"ad4deee6-2619-4e76-9a81-9adbaa868ee2\") " pod="openstack/nova-scheduler-0" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.357299 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad4deee6-2619-4e76-9a81-9adbaa868ee2-config-data\") pod \"nova-scheduler-0\" (UID: \"ad4deee6-2619-4e76-9a81-9adbaa868ee2\") " pod="openstack/nova-scheduler-0" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.458656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4deee6-2619-4e76-9a81-9adbaa868ee2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ad4deee6-2619-4e76-9a81-9adbaa868ee2\") " pod="openstack/nova-scheduler-0" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.458713 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wncrn\" (UniqueName: \"kubernetes.io/projected/ad4deee6-2619-4e76-9a81-9adbaa868ee2-kube-api-access-wncrn\") pod \"nova-scheduler-0\" (UID: \"ad4deee6-2619-4e76-9a81-9adbaa868ee2\") " pod="openstack/nova-scheduler-0" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.458745 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad4deee6-2619-4e76-9a81-9adbaa868ee2-config-data\") pod \"nova-scheduler-0\" (UID: \"ad4deee6-2619-4e76-9a81-9adbaa868ee2\") " pod="openstack/nova-scheduler-0" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.463140 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad4deee6-2619-4e76-9a81-9adbaa868ee2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ad4deee6-2619-4e76-9a81-9adbaa868ee2\") " pod="openstack/nova-scheduler-0" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.473915 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad4deee6-2619-4e76-9a81-9adbaa868ee2-config-data\") pod \"nova-scheduler-0\" (UID: \"ad4deee6-2619-4e76-9a81-9adbaa868ee2\") " pod="openstack/nova-scheduler-0" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.481524 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wncrn\" (UniqueName: 
\"kubernetes.io/projected/ad4deee6-2619-4e76-9a81-9adbaa868ee2-kube-api-access-wncrn\") pod \"nova-scheduler-0\" (UID: \"ad4deee6-2619-4e76-9a81-9adbaa868ee2\") " pod="openstack/nova-scheduler-0" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.489346 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.233:8775/\": read tcp 10.217.0.2:38378->10.217.0.233:8775: read: connection reset by peer" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.489690 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.233:8775/\": read tcp 10.217.0.2:38380->10.217.0.233:8775: read: connection reset by peer" Feb 17 00:47:03 crc kubenswrapper[4805]: I0217 00:47:03.583687 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.081604 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.173634 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-combined-ca-bundle\") pod \"ef39d973-397f-4d39-9e6a-7debbc762911\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.173737 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-config-data\") pod \"ef39d973-397f-4d39-9e6a-7debbc762911\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.173836 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef39d973-397f-4d39-9e6a-7debbc762911-logs\") pod \"ef39d973-397f-4d39-9e6a-7debbc762911\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.173929 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vtlw\" (UniqueName: \"kubernetes.io/projected/ef39d973-397f-4d39-9e6a-7debbc762911-kube-api-access-7vtlw\") pod \"ef39d973-397f-4d39-9e6a-7debbc762911\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.173973 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-nova-metadata-tls-certs\") pod \"ef39d973-397f-4d39-9e6a-7debbc762911\" (UID: \"ef39d973-397f-4d39-9e6a-7debbc762911\") " Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.175850 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef39d973-397f-4d39-9e6a-7debbc762911-logs" (OuterVolumeSpecName: "logs") pod "ef39d973-397f-4d39-9e6a-7debbc762911" (UID: "ef39d973-397f-4d39-9e6a-7debbc762911"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.191544 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef39d973-397f-4d39-9e6a-7debbc762911-kube-api-access-7vtlw" (OuterVolumeSpecName: "kube-api-access-7vtlw") pod "ef39d973-397f-4d39-9e6a-7debbc762911" (UID: "ef39d973-397f-4d39-9e6a-7debbc762911"). InnerVolumeSpecName "kube-api-access-7vtlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.205390 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.222385 4805 generic.go:334] "Generic (PLEG): container finished" podID="ef39d973-397f-4d39-9e6a-7debbc762911" containerID="c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898" exitCode=0 Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.222420 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.222479 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef39d973-397f-4d39-9e6a-7debbc762911","Type":"ContainerDied","Data":"c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898"} Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.222523 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ef39d973-397f-4d39-9e6a-7debbc762911","Type":"ContainerDied","Data":"0710179c127e5f937765f54a52ff1542eb7c9a3cc31a0a0b6da15c12759cc759"} Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.222542 4805 scope.go:117] "RemoveContainer" containerID="c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.235434 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef39d973-397f-4d39-9e6a-7debbc762911" (UID: "ef39d973-397f-4d39-9e6a-7debbc762911"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.235506 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-config-data" (OuterVolumeSpecName: "config-data") pod "ef39d973-397f-4d39-9e6a-7debbc762911" (UID: "ef39d973-397f-4d39-9e6a-7debbc762911"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.241897 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ef39d973-397f-4d39-9e6a-7debbc762911" (UID: "ef39d973-397f-4d39-9e6a-7debbc762911"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.257356 4805 scope.go:117] "RemoveContainer" containerID="d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.277561 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef39d973-397f-4d39-9e6a-7debbc762911-logs\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.277871 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vtlw\" (UniqueName: \"kubernetes.io/projected/ef39d973-397f-4d39-9e6a-7debbc762911-kube-api-access-7vtlw\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.277886 4805 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.277898 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.277910 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef39d973-397f-4d39-9e6a-7debbc762911-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.301636 4805 scope.go:117] "RemoveContainer" containerID="c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898" Feb 17 00:47:04 crc kubenswrapper[4805]: E0217 00:47:04.304977 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898\": container with ID starting with c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898 not found: ID does not exist" containerID="c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.305020 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898"} err="failed to get container status \"c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898\": rpc error: code = NotFound desc = could not find container \"c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898\": container with ID starting with c730ec00238340d19e6d1666e390c8978cea223ad0a686bbdc2a9e1b610a4898 not found: ID does not exist" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.305050 4805 scope.go:117] "RemoveContainer" containerID="d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237" Feb 17 00:47:04 crc kubenswrapper[4805]: E0217 00:47:04.306900 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237\": container with ID starting with d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237 not found: ID does not exist" containerID="d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.306933 4805 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237"} err="failed to get container status \"d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237\": rpc error: code = NotFound desc = could not find container \"d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237\": container with ID starting with d8a871e27dbd62d29e465826a6732d38660aa185df2d3fac9d99613bdfc08237 not found: ID does not exist" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.559152 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.575785 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.587707 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:47:04 crc kubenswrapper[4805]: E0217 00:47:04.588239 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-metadata" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.588266 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-metadata" Feb 17 00:47:04 crc kubenswrapper[4805]: E0217 00:47:04.588296 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-log" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.588306 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-log" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.588566 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-metadata" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.588598 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" containerName="nova-metadata-log" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.589850 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.592307 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.594502 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.607755 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.685835 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20146d61-c58a-4fbe-9cb8-9a11af3b159a-logs\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.685921 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20146d61-c58a-4fbe-9cb8-9a11af3b159a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.685965 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/20146d61-c58a-4fbe-9cb8-9a11af3b159a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.686018 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbnzv\" (UniqueName: \"kubernetes.io/projected/20146d61-c58a-4fbe-9cb8-9a11af3b159a-kube-api-access-zbnzv\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.686268 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20146d61-c58a-4fbe-9cb8-9a11af3b159a-config-data\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.788711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20146d61-c58a-4fbe-9cb8-9a11af3b159a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.788788 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/20146d61-c58a-4fbe-9cb8-9a11af3b159a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.788818 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbnzv\" (UniqueName: \"kubernetes.io/projected/20146d61-c58a-4fbe-9cb8-9a11af3b159a-kube-api-access-zbnzv\") pod \"nova-metadata-0\" (UID: 
\"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.788959 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20146d61-c58a-4fbe-9cb8-9a11af3b159a-config-data\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.789137 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20146d61-c58a-4fbe-9cb8-9a11af3b159a-logs\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.789678 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20146d61-c58a-4fbe-9cb8-9a11af3b159a-logs\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.793639 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/20146d61-c58a-4fbe-9cb8-9a11af3b159a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.794909 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20146d61-c58a-4fbe-9cb8-9a11af3b159a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.796230 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20146d61-c58a-4fbe-9cb8-9a11af3b159a-config-data\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.808858 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbnzv\" (UniqueName: \"kubernetes.io/projected/20146d61-c58a-4fbe-9cb8-9a11af3b159a-kube-api-access-zbnzv\") pod \"nova-metadata-0\" (UID: \"20146d61-c58a-4fbe-9cb8-9a11af3b159a\") " pod="openstack/nova-metadata-0" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.812936 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2936e576-b736-4e51-af25-bf06d2959067" path="/var/lib/kubelet/pods/2936e576-b736-4e51-af25-bf06d2959067/volumes" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.814372 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef39d973-397f-4d39-9e6a-7debbc762911" path="/var/lib/kubelet/pods/ef39d973-397f-4d39-9e6a-7debbc762911/volumes" Feb 17 00:47:04 crc kubenswrapper[4805]: I0217 00:47:04.969136 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 00:47:05 crc kubenswrapper[4805]: I0217 00:47:05.237917 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ad4deee6-2619-4e76-9a81-9adbaa868ee2","Type":"ContainerStarted","Data":"36917e6de23f5f68f93d2b9aabdac8dcd94c337c167820d047a5ec536f31897c"} Feb 17 00:47:05 crc kubenswrapper[4805]: I0217 00:47:05.238313 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ad4deee6-2619-4e76-9a81-9adbaa868ee2","Type":"ContainerStarted","Data":"b4091fab497b1fae0de8675f95aee47377852388ff9632bff709cfbb3c1b43d0"} Feb 17 00:47:05 crc kubenswrapper[4805]: I0217 00:47:05.262617 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.26259606 podStartE2EDuration="2.26259606s" podCreationTimestamp="2026-02-17 00:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:47:05.255450731 +0000 UTC m=+1451.271260139" watchObservedRunningTime="2026-02-17 00:47:05.26259606 +0000 UTC m=+1451.278405458" Feb 17 00:47:05 crc kubenswrapper[4805]: I0217 00:47:05.513976 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 00:47:05 crc kubenswrapper[4805]: W0217 00:47:05.530012 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20146d61_c58a_4fbe_9cb8_9a11af3b159a.slice/crio-d7ed26e1406d7426fa7d19c3e5e605b05dd38af5c57ec7e928380f37c1f43d32 WatchSource:0}: Error finding container d7ed26e1406d7426fa7d19c3e5e605b05dd38af5c57ec7e928380f37c1f43d32: Status 404 returned error can't find the container with id d7ed26e1406d7426fa7d19c3e5e605b05dd38af5c57ec7e928380f37c1f43d32 Feb 17 00:47:06 crc kubenswrapper[4805]: I0217 00:47:06.257972 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20146d61-c58a-4fbe-9cb8-9a11af3b159a","Type":"ContainerStarted","Data":"30670fdd24405b8a4e233f1b2d4a1e12c8f4d28747d8d431d5f799305639d6f7"} Feb 17 00:47:06 crc kubenswrapper[4805]: I0217 00:47:06.258208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20146d61-c58a-4fbe-9cb8-9a11af3b159a","Type":"ContainerStarted","Data":"c3137c53e2fd4e8d84b0a94f3d49d9129177a6a714bb329ce2e56d75beccd0ef"} Feb 17 00:47:06 crc kubenswrapper[4805]: I0217 00:47:06.258217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"20146d61-c58a-4fbe-9cb8-9a11af3b159a","Type":"ContainerStarted","Data":"d7ed26e1406d7426fa7d19c3e5e605b05dd38af5c57ec7e928380f37c1f43d32"} Feb 17 00:47:06 crc kubenswrapper[4805]: I0217 00:47:06.297089 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.297073967 podStartE2EDuration="2.297073967s" podCreationTimestamp="2026-02-17 00:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:47:06.295957056 +0000 UTC m=+1452.311766454" watchObservedRunningTime="2026-02-17 00:47:06.297073967 +0000 UTC m=+1452.312883365" Feb 17 00:47:08 crc kubenswrapper[4805]: I0217 00:47:08.584283 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 00:47:09 
crc kubenswrapper[4805]: I0217 00:47:09.970908 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 00:47:09 crc kubenswrapper[4805]: I0217 00:47:09.972142 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 00:47:11 crc kubenswrapper[4805]: I0217 00:47:11.576225 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 00:47:11 crc kubenswrapper[4805]: I0217 00:47:11.576298 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 00:47:12 crc kubenswrapper[4805]: I0217 00:47:12.594594 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a6481c50-bc40-4ee2-a161-127c2d2d23df" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.244:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 00:47:12 crc kubenswrapper[4805]: I0217 00:47:12.595231 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a6481c50-bc40-4ee2-a161-127c2d2d23df" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.244:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 00:47:13 crc kubenswrapper[4805]: I0217 00:47:13.584192 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 00:47:13 crc kubenswrapper[4805]: I0217 00:47:13.635132 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 00:47:14 crc kubenswrapper[4805]: I0217 00:47:14.485586 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 17 00:47:14 crc kubenswrapper[4805]: I0217 00:47:14.970060 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 00:47:14 crc kubenswrapper[4805]: I0217 00:47:14.970107 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 00:47:15 crc kubenswrapper[4805]: I0217 00:47:15.984570 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="20146d61-c58a-4fbe-9cb8-9a11af3b159a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 00:47:15 crc kubenswrapper[4805]: I0217 00:47:15.984598 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="20146d61-c58a-4fbe-9cb8-9a11af3b159a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.246:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 00:47:20 crc kubenswrapper[4805]: I0217 00:47:20.488371 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 00:47:21 crc kubenswrapper[4805]: I0217 00:47:21.586059 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 00:47:21 crc kubenswrapper[4805]: I0217 00:47:21.586808 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 00:47:21 crc kubenswrapper[4805]: I0217 00:47:21.589431 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 00:47:21 crc kubenswrapper[4805]: I0217 00:47:21.596823 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 00:47:22 crc kubenswrapper[4805]: I0217 00:47:22.549268 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 00:47:22 crc kubenswrapper[4805]: I0217 00:47:22.555796 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 00:47:23 crc kubenswrapper[4805]: I0217 00:47:23.077493 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:47:23 crc kubenswrapper[4805]: I0217 00:47:23.077823 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:47:24 crc kubenswrapper[4805]: I0217 00:47:24.975290 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 00:47:24 crc kubenswrapper[4805]: I0217 00:47:24.981435 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 00:47:24 crc kubenswrapper[4805]: I0217 00:47:24.982905 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 00:47:25 crc kubenswrapper[4805]: I0217 00:47:25.593207 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 00:47:34 crc kubenswrapper[4805]: I0217 00:47:34.980261 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-ztgpf"] Feb 17 00:47:34 crc kubenswrapper[4805]: I0217 00:47:34.991529 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-ztgpf"] Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.092760 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-tvlw9"] Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.094252 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-tvlw9" Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.111557 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-tvlw9"] Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.189678 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70acc4f3-ace6-4366-9270-6bd9242da91b-config-data\") pod \"heat-db-sync-tvlw9\" (UID: \"70acc4f3-ace6-4366-9270-6bd9242da91b\") " pod="openstack/heat-db-sync-tvlw9" Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.189981 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70acc4f3-ace6-4366-9270-6bd9242da91b-combined-ca-bundle\") pod \"heat-db-sync-tvlw9\" (UID: \"70acc4f3-ace6-4366-9270-6bd9242da91b\") " pod="openstack/heat-db-sync-tvlw9" Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.190090 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2vq\" (UniqueName: \"kubernetes.io/projected/70acc4f3-ace6-4366-9270-6bd9242da91b-kube-api-access-gt2vq\") pod \"heat-db-sync-tvlw9\" (UID: \"70acc4f3-ace6-4366-9270-6bd9242da91b\") " pod="openstack/heat-db-sync-tvlw9" Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.291979 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70acc4f3-ace6-4366-9270-6bd9242da91b-combined-ca-bundle\") pod \"heat-db-sync-tvlw9\" (UID: \"70acc4f3-ace6-4366-9270-6bd9242da91b\") " pod="openstack/heat-db-sync-tvlw9" Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.292310 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt2vq\" (UniqueName: \"kubernetes.io/projected/70acc4f3-ace6-4366-9270-6bd9242da91b-kube-api-access-gt2vq\") pod \"heat-db-sync-tvlw9\" (UID: \"70acc4f3-ace6-4366-9270-6bd9242da91b\") " pod="openstack/heat-db-sync-tvlw9" Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.292452 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70acc4f3-ace6-4366-9270-6bd9242da91b-config-data\") pod \"heat-db-sync-tvlw9\" (UID: \"70acc4f3-ace6-4366-9270-6bd9242da91b\") " pod="openstack/heat-db-sync-tvlw9" Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.310150 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70acc4f3-ace6-4366-9270-6bd9242da91b-combined-ca-bundle\") pod \"heat-db-sync-tvlw9\" (UID: \"70acc4f3-ace6-4366-9270-6bd9242da91b\") " pod="openstack/heat-db-sync-tvlw9" Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.311057 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70acc4f3-ace6-4366-9270-6bd9242da91b-config-data\") pod \"heat-db-sync-tvlw9\" (UID: \"70acc4f3-ace6-4366-9270-6bd9242da91b\") " pod="openstack/heat-db-sync-tvlw9" Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.311928 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt2vq\" (UniqueName: \"kubernetes.io/projected/70acc4f3-ace6-4366-9270-6bd9242da91b-kube-api-access-gt2vq\") pod \"heat-db-sync-tvlw9\" (UID: 
\"70acc4f3-ace6-4366-9270-6bd9242da91b\") " pod="openstack/heat-db-sync-tvlw9" Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.428169 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-tvlw9" Feb 17 00:47:35 crc kubenswrapper[4805]: W0217 00:47:35.965917 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70acc4f3_ace6_4366_9270_6bd9242da91b.slice/crio-e4d72af823247dce37813861af1f281c76c611748a088a3bcca94b7dce718f8c WatchSource:0}: Error finding container e4d72af823247dce37813861af1f281c76c611748a088a3bcca94b7dce718f8c: Status 404 returned error can't find the container with id e4d72af823247dce37813861af1f281c76c611748a088a3bcca94b7dce718f8c Feb 17 00:47:35 crc kubenswrapper[4805]: I0217 00:47:35.970863 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-tvlw9"] Feb 17 00:47:36 crc kubenswrapper[4805]: E0217 00:47:36.094376 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:47:36 crc kubenswrapper[4805]: E0217 00:47:36.094454 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:47:36 crc kubenswrapper[4805]: E0217 00:47:36.094607 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:47:36 crc kubenswrapper[4805]: E0217 00:47:36.095835 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:47:36 crc kubenswrapper[4805]: I0217 00:47:36.719734 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-tvlw9" event={"ID":"70acc4f3-ace6-4366-9270-6bd9242da91b","Type":"ContainerStarted","Data":"e4d72af823247dce37813861af1f281c76c611748a088a3bcca94b7dce718f8c"} Feb 17 00:47:36 crc kubenswrapper[4805]: E0217 00:47:36.721821 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:47:36 crc kubenswrapper[4805]: I0217 00:47:36.799701 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aacb9ef7-b269-44c2-9b51-62067ea3545b" path="/var/lib/kubelet/pods/aacb9ef7-b269-44c2-9b51-62067ea3545b/volumes" Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.087648 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.088002 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="ceilometer-central-agent" containerID="cri-o://e926f9924473eff08fe262e6df894ff328407d82072b25773d16d9854397d722" gracePeriod=30 Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.088066 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="sg-core" containerID="cri-o://ec49b0f8d358830df6e4c2847b0efbe4ca099ea1ca72b312be86054dc6d91659" gracePeriod=30 Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.088072 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="ceilometer-notification-agent" containerID="cri-o://94f88c087d451b909e3b5f712ea7d45c1990589e85bab58f20ae21d31efff3c0" gracePeriod=30 Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.088142 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="proxy-httpd" containerID="cri-o://a3bf4eaf6845bb8bc7a63f36847355f1129d1065934ae27afdd6fad8ce4d6068" gracePeriod=30 Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.116875 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.733303 4805 generic.go:334] "Generic (PLEG): container finished" podID="b4d499ab-baf7-4e88-8631-38170125d756" containerID="a3bf4eaf6845bb8bc7a63f36847355f1129d1065934ae27afdd6fad8ce4d6068" exitCode=0 Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.733617 4805 generic.go:334] "Generic (PLEG): container finished" podID="b4d499ab-baf7-4e88-8631-38170125d756" containerID="ec49b0f8d358830df6e4c2847b0efbe4ca099ea1ca72b312be86054dc6d91659" exitCode=2 Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.733629 4805 generic.go:334] "Generic (PLEG): container finished" podID="b4d499ab-baf7-4e88-8631-38170125d756" containerID="e926f9924473eff08fe262e6df894ff328407d82072b25773d16d9854397d722" exitCode=0 Feb 17 00:47:37 
crc kubenswrapper[4805]: I0217 00:47:37.733369 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4d499ab-baf7-4e88-8631-38170125d756","Type":"ContainerDied","Data":"a3bf4eaf6845bb8bc7a63f36847355f1129d1065934ae27afdd6fad8ce4d6068"} Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.733978 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4d499ab-baf7-4e88-8631-38170125d756","Type":"ContainerDied","Data":"ec49b0f8d358830df6e4c2847b0efbe4ca099ea1ca72b312be86054dc6d91659"} Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.734050 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4d499ab-baf7-4e88-8631-38170125d756","Type":"ContainerDied","Data":"e926f9924473eff08fe262e6df894ff328407d82072b25773d16d9854397d722"} Feb 17 00:47:37 crc kubenswrapper[4805]: E0217 00:47:37.735507 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:47:37 crc kubenswrapper[4805]: I0217 00:47:37.858887 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.772199 4805 generic.go:334] "Generic (PLEG): container finished" podID="b4d499ab-baf7-4e88-8631-38170125d756" containerID="94f88c087d451b909e3b5f712ea7d45c1990589e85bab58f20ae21d31efff3c0" exitCode=0 Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.772925 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4d499ab-baf7-4e88-8631-38170125d756","Type":"ContainerDied","Data":"94f88c087d451b909e3b5f712ea7d45c1990589e85bab58f20ae21d31efff3c0"} Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.773729 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4d499ab-baf7-4e88-8631-38170125d756","Type":"ContainerDied","Data":"9a9f951a4793396b5b029ca8edfc89af0216266cf7e646f8fdcacd506c129c4f"} Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.773819 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a9f951a4793396b5b029ca8edfc89af0216266cf7e646f8fdcacd506c129c4f" Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.832498 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.913733 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-scripts\") pod \"b4d499ab-baf7-4e88-8631-38170125d756\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.913925 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-run-httpd\") pod \"b4d499ab-baf7-4e88-8631-38170125d756\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.914053 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-log-httpd\") pod \"b4d499ab-baf7-4e88-8631-38170125d756\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.914180 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-config-data\") pod \"b4d499ab-baf7-4e88-8631-38170125d756\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.914253 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-combined-ca-bundle\") pod \"b4d499ab-baf7-4e88-8631-38170125d756\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.914330 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-ceilometer-tls-certs\") pod \"b4d499ab-baf7-4e88-8631-38170125d756\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.914440 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-sg-core-conf-yaml\") pod \"b4d499ab-baf7-4e88-8631-38170125d756\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.914570 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b4d499ab-baf7-4e88-8631-38170125d756" (UID: "b4d499ab-baf7-4e88-8631-38170125d756"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.914590 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrbmb\" (UniqueName: \"kubernetes.io/projected/b4d499ab-baf7-4e88-8631-38170125d756-kube-api-access-zrbmb\") pod \"b4d499ab-baf7-4e88-8631-38170125d756\" (UID: \"b4d499ab-baf7-4e88-8631-38170125d756\") " Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.915140 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b4d499ab-baf7-4e88-8631-38170125d756" (UID: "b4d499ab-baf7-4e88-8631-38170125d756"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.915711 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.915730 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4d499ab-baf7-4e88-8631-38170125d756-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.919460 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d499ab-baf7-4e88-8631-38170125d756-kube-api-access-zrbmb" (OuterVolumeSpecName: "kube-api-access-zrbmb") pod "b4d499ab-baf7-4e88-8631-38170125d756" (UID: "b4d499ab-baf7-4e88-8631-38170125d756"). InnerVolumeSpecName "kube-api-access-zrbmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.919785 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-scripts" (OuterVolumeSpecName: "scripts") pod "b4d499ab-baf7-4e88-8631-38170125d756" (UID: "b4d499ab-baf7-4e88-8631-38170125d756"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:40 crc kubenswrapper[4805]: I0217 00:47:40.950689 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b4d499ab-baf7-4e88-8631-38170125d756" (UID: "b4d499ab-baf7-4e88-8631-38170125d756"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.003016 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b4d499ab-baf7-4e88-8631-38170125d756" (UID: "b4d499ab-baf7-4e88-8631-38170125d756"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.007774 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4d499ab-baf7-4e88-8631-38170125d756" (UID: "b4d499ab-baf7-4e88-8631-38170125d756"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.018134 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.018170 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.018185 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.018226 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.018237 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrbmb\" (UniqueName: \"kubernetes.io/projected/b4d499ab-baf7-4e88-8631-38170125d756-kube-api-access-zrbmb\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.034619 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-config-data" (OuterVolumeSpecName: "config-data") pod "b4d499ab-baf7-4e88-8631-38170125d756" (UID: "b4d499ab-baf7-4e88-8631-38170125d756"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.120412 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4d499ab-baf7-4e88-8631-38170125d756-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.221965 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e2ca81e9-e569-4f1b-afcc-be3e47407114" containerName="rabbitmq" containerID="cri-o://596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e" gracePeriod=604796 Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.782950 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.816481 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.827001 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.851696 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:47:41 crc kubenswrapper[4805]: E0217 00:47:41.852234 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="ceilometer-notification-agent" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.852257 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="ceilometer-notification-agent" Feb 17 00:47:41 crc kubenswrapper[4805]: E0217 00:47:41.852276 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="ceilometer-central-agent" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.852284 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="ceilometer-central-agent" Feb 17 00:47:41 crc kubenswrapper[4805]: E0217 00:47:41.852312 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="proxy-httpd" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.852323 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="proxy-httpd" Feb 17 00:47:41 crc kubenswrapper[4805]: E0217 00:47:41.852350 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="sg-core" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.852358 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="sg-core" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.852573 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="ceilometer-central-agent" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.852596 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="sg-core" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.852612 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="proxy-httpd" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.852626 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d499ab-baf7-4e88-8631-38170125d756" containerName="ceilometer-notification-agent" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.865731 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.865836 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.868798 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.868917 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.869737 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.938465 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.938522 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78cfb873-5ac3-472d-91e4-299e5df21da3-log-httpd\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.938548 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bmt4\" (UniqueName: \"kubernetes.io/projected/78cfb873-5ac3-472d-91e4-299e5df21da3-kube-api-access-7bmt4\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.938574 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.938609 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-config-data\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.938954 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78cfb873-5ac3-472d-91e4-299e5df21da3-run-httpd\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.939115 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:41 crc kubenswrapper[4805]: I0217 00:47:41.939422 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-scripts\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 
17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.041788 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.042202 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-scripts\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.042296 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.042327 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78cfb873-5ac3-472d-91e4-299e5df21da3-log-httpd\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.042369 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bmt4\" (UniqueName: \"kubernetes.io/projected/78cfb873-5ac3-472d-91e4-299e5df21da3-kube-api-access-7bmt4\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.042404 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.042797 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-config-data\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.042885 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78cfb873-5ac3-472d-91e4-299e5df21da3-log-httpd\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.043243 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78cfb873-5ac3-472d-91e4-299e5df21da3-run-httpd\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.043620 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78cfb873-5ac3-472d-91e4-299e5df21da3-run-httpd\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc 
kubenswrapper[4805]: I0217 00:47:42.046112 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-scripts\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.046253 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.058405 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-config-data\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.061809 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.073614 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78cfb873-5ac3-472d-91e4-299e5df21da3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.081464 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bmt4\" (UniqueName: \"kubernetes.io/projected/78cfb873-5ac3-472d-91e4-299e5df21da3-kube-api-access-7bmt4\") pod \"ceilometer-0\" (UID: \"78cfb873-5ac3-472d-91e4-299e5df21da3\") " pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.185721 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.196891 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="dc55b214-5b43-49cd-aadb-967188b34da1" containerName="rabbitmq" containerID="cri-o://8f7b5996bb3baf66a48bffeafa69160fb68716c1e0a3995629306da5bb81fb20" gracePeriod=604796 Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.694544 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.800873 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d499ab-baf7-4e88-8631-38170125d756" path="/var/lib/kubelet/pods/b4d499ab-baf7-4e88-8631-38170125d756/volumes" Feb 17 00:47:42 crc kubenswrapper[4805]: I0217 00:47:42.801890 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78cfb873-5ac3-472d-91e4-299e5df21da3","Type":"ContainerStarted","Data":"b1c7d0e6adf4f39acc1b54b0ce0b689c01a20500b279f7191cfe5776c580a59e"} Feb 17 00:47:42 crc kubenswrapper[4805]: E0217 00:47:42.832061 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:47:42 crc kubenswrapper[4805]: E0217 00:47:42.832141 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:47:42 crc kubenswrapper[4805]: E0217 00:47:42.832356 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 00:47:43 crc kubenswrapper[4805]: I0217 00:47:43.813207 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78cfb873-5ac3-472d-91e4-299e5df21da3","Type":"ContainerStarted","Data":"20269a364cf215ecfc2a8e6164a3a1d6d5e340d77893eb09bf33b5ba024178d0"} Feb 17 00:47:44 crc kubenswrapper[4805]: I0217 00:47:44.847573 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78cfb873-5ac3-472d-91e4-299e5df21da3","Type":"ContainerStarted","Data":"103ecc9386b7ea6d285cfc773220a04acb6f89d84b6abe6d902f44976d393ee0"} Feb 17 00:47:45 crc kubenswrapper[4805]: E0217 00:47:45.450761 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:47:45 crc kubenswrapper[4805]: I0217 00:47:45.865748 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78cfb873-5ac3-472d-91e4-299e5df21da3","Type":"ContainerStarted","Data":"98be843884cb7459da0cbc6f40da2a041c1caf841b5789b21e61a26a084b590b"} Feb 17 00:47:45 crc kubenswrapper[4805]: E0217 00:47:45.873680 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:47:46 crc kubenswrapper[4805]: I0217 00:47:46.876388 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 00:47:46 crc kubenswrapper[4805]: E0217 00:47:46.878587 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.641646 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lswkt"] Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.644400 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.666609 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lswkt"] Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.681153 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-catalog-content\") pod \"community-operators-lswkt\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.681230 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-utilities\") pod \"community-operators-lswkt\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.681357 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmzpx\" (UniqueName: \"kubernetes.io/projected/be3c965b-0000-4121-96e3-28e6ff25b1b7-kube-api-access-nmzpx\") pod \"community-operators-lswkt\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.782956 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-utilities\") pod \"community-operators-lswkt\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.783064 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmzpx\" (UniqueName: \"kubernetes.io/projected/be3c965b-0000-4121-96e3-28e6ff25b1b7-kube-api-access-nmzpx\") pod \"community-operators-lswkt\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.783146 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-catalog-content\") pod \"community-operators-lswkt\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.783763 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-utilities\") pod \"community-operators-lswkt\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.783634 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-catalog-content\") pod \"community-operators-lswkt\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.810841 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nmzpx\" (UniqueName: \"kubernetes.io/projected/be3c965b-0000-4121-96e3-28e6ff25b1b7-kube-api-access-nmzpx\") pod \"community-operators-lswkt\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.918605 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.919919 4805 generic.go:334] "Generic (PLEG): container finished" podID="e2ca81e9-e569-4f1b-afcc-be3e47407114" containerID="596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e" exitCode=0 Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.920755 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2ca81e9-e569-4f1b-afcc-be3e47407114","Type":"ContainerDied","Data":"596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e"} Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.920810 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2ca81e9-e569-4f1b-afcc-be3e47407114","Type":"ContainerDied","Data":"44cd75bca52272431f59ccf51860d68ab7b88ca516662a4b55a8c117ea2585a7"} Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.921090 4805 scope.go:117] "RemoveContainer" containerID="596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e" Feb 17 00:47:47 crc kubenswrapper[4805]: E0217 00:47:47.971588 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.972180 4805 scope.go:117] "RemoveContainer" containerID="d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.977903 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.987569 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-server-conf\") pod \"e2ca81e9-e569-4f1b-afcc-be3e47407114\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.987663 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-plugins\") pod \"e2ca81e9-e569-4f1b-afcc-be3e47407114\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.987684 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrwhv\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-kube-api-access-vrwhv\") pod \"e2ca81e9-e569-4f1b-afcc-be3e47407114\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.987718 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-erlang-cookie\") pod \"e2ca81e9-e569-4f1b-afcc-be3e47407114\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.987753 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-config-data\") pod \"e2ca81e9-e569-4f1b-afcc-be3e47407114\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.987781 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-plugins-conf\") pod \"e2ca81e9-e569-4f1b-afcc-be3e47407114\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.988522 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-confd\") pod \"e2ca81e9-e569-4f1b-afcc-be3e47407114\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.993598 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-tls\") pod \"e2ca81e9-e569-4f1b-afcc-be3e47407114\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.993685 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"e2ca81e9-e569-4f1b-afcc-be3e47407114\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.993724 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2ca81e9-e569-4f1b-afcc-be3e47407114-pod-info\") pod \"e2ca81e9-e569-4f1b-afcc-be3e47407114\" (UID: 
\"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " Feb 17 00:47:47 crc kubenswrapper[4805]: I0217 00:47:47.993765 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2ca81e9-e569-4f1b-afcc-be3e47407114-erlang-cookie-secret\") pod \"e2ca81e9-e569-4f1b-afcc-be3e47407114\" (UID: \"e2ca81e9-e569-4f1b-afcc-be3e47407114\") " Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.000508 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e2ca81e9-e569-4f1b-afcc-be3e47407114" (UID: "e2ca81e9-e569-4f1b-afcc-be3e47407114"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.001593 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e2ca81e9-e569-4f1b-afcc-be3e47407114" (UID: "e2ca81e9-e569-4f1b-afcc-be3e47407114"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.003186 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-kube-api-access-vrwhv" (OuterVolumeSpecName: "kube-api-access-vrwhv") pod "e2ca81e9-e569-4f1b-afcc-be3e47407114" (UID: "e2ca81e9-e569-4f1b-afcc-be3e47407114"). InnerVolumeSpecName "kube-api-access-vrwhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.008932 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e2ca81e9-e569-4f1b-afcc-be3e47407114-pod-info" (OuterVolumeSpecName: "pod-info") pod "e2ca81e9-e569-4f1b-afcc-be3e47407114" (UID: "e2ca81e9-e569-4f1b-afcc-be3e47407114"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.009566 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e2ca81e9-e569-4f1b-afcc-be3e47407114" (UID: "e2ca81e9-e569-4f1b-afcc-be3e47407114"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.025988 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e2ca81e9-e569-4f1b-afcc-be3e47407114" (UID: "e2ca81e9-e569-4f1b-afcc-be3e47407114"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.029975 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ca81e9-e569-4f1b-afcc-be3e47407114-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e2ca81e9-e569-4f1b-afcc-be3e47407114" (UID: "e2ca81e9-e569-4f1b-afcc-be3e47407114"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.034429 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "e2ca81e9-e569-4f1b-afcc-be3e47407114" (UID: "e2ca81e9-e569-4f1b-afcc-be3e47407114"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.072074 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-config-data" (OuterVolumeSpecName: "config-data") pod "e2ca81e9-e569-4f1b-afcc-be3e47407114" (UID: "e2ca81e9-e569-4f1b-afcc-be3e47407114"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.097283 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.097313 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrwhv\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-kube-api-access-vrwhv\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.097342 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.097353 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.097363 4805 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.097372 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.097396 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.097406 4805 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2ca81e9-e569-4f1b-afcc-be3e47407114-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.097417 4805 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2ca81e9-e569-4f1b-afcc-be3e47407114-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.123568 4805 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 
17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.140906 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-server-conf" (OuterVolumeSpecName: "server-conf") pod "e2ca81e9-e569-4f1b-afcc-be3e47407114" (UID: "e2ca81e9-e569-4f1b-afcc-be3e47407114"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.182639 4805 scope.go:117] "RemoveContainer" containerID="596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e" Feb 17 00:47:48 crc kubenswrapper[4805]: E0217 00:47:48.194052 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e\": container with ID starting with 596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e not found: ID does not exist" containerID="596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.194341 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e"} err="failed to get container status \"596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e\": rpc error: code = NotFound desc = could not find container \"596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e\": container with ID starting with 596840d7e4f40f46bfabc593fd68a5701e387aa237da0572f25210f4bf132d5e not found: ID does not exist" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.194370 4805 scope.go:117] "RemoveContainer" containerID="d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.205412 4805 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.205442 4805 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2ca81e9-e569-4f1b-afcc-be3e47407114-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:48 crc kubenswrapper[4805]: E0217 00:47:48.205538 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c\": container with ID starting with d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c not found: ID does not exist" containerID="d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.205564 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c"} err="failed to get container status \"d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c\": rpc error: code = NotFound desc = could not find container \"d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c\": container with ID starting with d94a75183a262d8a0e193ca975a8bab3fcca110a58138c8ad09f4c39ea12362c not found: ID does not exist" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.251562 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e2ca81e9-e569-4f1b-afcc-be3e47407114" (UID: "e2ca81e9-e569-4f1b-afcc-be3e47407114"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.312717 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2ca81e9-e569-4f1b-afcc-be3e47407114-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.607806 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lswkt"] Feb 17 00:47:48 crc kubenswrapper[4805]: W0217 00:47:48.611632 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe3c965b_0000_4121_96e3_28e6ff25b1b7.slice/crio-3789bb112c360b50075a334ed2adaebf72bcb10c28a893995ac45e38d48f0692 WatchSource:0}: Error finding container 3789bb112c360b50075a334ed2adaebf72bcb10c28a893995ac45e38d48f0692: Status 404 returned error can't find the container with id 3789bb112c360b50075a334ed2adaebf72bcb10c28a893995ac45e38d48f0692 Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.932220 4805 generic.go:334] "Generic (PLEG): container finished" podID="be3c965b-0000-4121-96e3-28e6ff25b1b7" containerID="ab55db8089601461c04cbf4592595b16ceb4e006fde3aca9c17134ae51327503" exitCode=0 Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.932269 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lswkt" event={"ID":"be3c965b-0000-4121-96e3-28e6ff25b1b7","Type":"ContainerDied","Data":"ab55db8089601461c04cbf4592595b16ceb4e006fde3aca9c17134ae51327503"} Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.932574 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lswkt" event={"ID":"be3c965b-0000-4121-96e3-28e6ff25b1b7","Type":"ContainerStarted","Data":"3789bb112c360b50075a334ed2adaebf72bcb10c28a893995ac45e38d48f0692"} Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.934674 4805 generic.go:334] "Generic (PLEG): container finished" podID="dc55b214-5b43-49cd-aadb-967188b34da1" containerID="8f7b5996bb3baf66a48bffeafa69160fb68716c1e0a3995629306da5bb81fb20" exitCode=0 Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.934719 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc55b214-5b43-49cd-aadb-967188b34da1","Type":"ContainerDied","Data":"8f7b5996bb3baf66a48bffeafa69160fb68716c1e0a3995629306da5bb81fb20"} Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.934742 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dc55b214-5b43-49cd-aadb-967188b34da1","Type":"ContainerDied","Data":"13b813c0bfb537ed9e25aa6071f6f1c024f26c0936d8369ff8012c3cd7befba6"} Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.934752 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13b813c0bfb537ed9e25aa6071f6f1c024f26c0936d8369ff8012c3cd7befba6" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.939769 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.960402 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.977218 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 00:47:48 crc kubenswrapper[4805]: I0217 00:47:48.986975 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.028712 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc55b214-5b43-49cd-aadb-967188b34da1-pod-info\") pod \"dc55b214-5b43-49cd-aadb-967188b34da1\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.028896 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8l9f\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-kube-api-access-h8l9f\") pod \"dc55b214-5b43-49cd-aadb-967188b34da1\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.029022 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-config-data\") pod \"dc55b214-5b43-49cd-aadb-967188b34da1\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.029220 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-plugins-conf\") pod \"dc55b214-5b43-49cd-aadb-967188b34da1\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.029309 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-confd\") pod \"dc55b214-5b43-49cd-aadb-967188b34da1\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.029423 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-erlang-cookie\") pod \"dc55b214-5b43-49cd-aadb-967188b34da1\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.029503 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-tls\") pod \"dc55b214-5b43-49cd-aadb-967188b34da1\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.029572 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc55b214-5b43-49cd-aadb-967188b34da1-erlang-cookie-secret\") pod \"dc55b214-5b43-49cd-aadb-967188b34da1\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.029678 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-server-conf\") pod \"dc55b214-5b43-49cd-aadb-967188b34da1\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.029745 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-plugins\") pod \"dc55b214-5b43-49cd-aadb-967188b34da1\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.029830 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"dc55b214-5b43-49cd-aadb-967188b34da1\" (UID: \"dc55b214-5b43-49cd-aadb-967188b34da1\") " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.031076 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dc55b214-5b43-49cd-aadb-967188b34da1" (UID: "dc55b214-5b43-49cd-aadb-967188b34da1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.034061 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dc55b214-5b43-49cd-aadb-967188b34da1" (UID: "dc55b214-5b43-49cd-aadb-967188b34da1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.034512 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dc55b214-5b43-49cd-aadb-967188b34da1" (UID: "dc55b214-5b43-49cd-aadb-967188b34da1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.040917 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc55b214-5b43-49cd-aadb-967188b34da1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dc55b214-5b43-49cd-aadb-967188b34da1" (UID: "dc55b214-5b43-49cd-aadb-967188b34da1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.043297 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-kube-api-access-h8l9f" (OuterVolumeSpecName: "kube-api-access-h8l9f") pod "dc55b214-5b43-49cd-aadb-967188b34da1" (UID: "dc55b214-5b43-49cd-aadb-967188b34da1"). InnerVolumeSpecName "kube-api-access-h8l9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.058690 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dc55b214-5b43-49cd-aadb-967188b34da1" (UID: "dc55b214-5b43-49cd-aadb-967188b34da1"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.059215 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "dc55b214-5b43-49cd-aadb-967188b34da1" (UID: "dc55b214-5b43-49cd-aadb-967188b34da1"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.060510 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dc55b214-5b43-49cd-aadb-967188b34da1-pod-info" (OuterVolumeSpecName: "pod-info") pod "dc55b214-5b43-49cd-aadb-967188b34da1" (UID: "dc55b214-5b43-49cd-aadb-967188b34da1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.073926 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 00:47:49 crc kubenswrapper[4805]: E0217 00:47:49.074428 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc55b214-5b43-49cd-aadb-967188b34da1" containerName="rabbitmq" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.074452 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc55b214-5b43-49cd-aadb-967188b34da1" containerName="rabbitmq" Feb 17 00:47:49 crc kubenswrapper[4805]: E0217 00:47:49.074473 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc55b214-5b43-49cd-aadb-967188b34da1" containerName="setup-container" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.074484 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc55b214-5b43-49cd-aadb-967188b34da1" containerName="setup-container" Feb 17 00:47:49 crc kubenswrapper[4805]: E0217 00:47:49.074506 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ca81e9-e569-4f1b-afcc-be3e47407114" containerName="rabbitmq" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.074514 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ca81e9-e569-4f1b-afcc-be3e47407114" containerName="rabbitmq" Feb 17 00:47:49 crc kubenswrapper[4805]: E0217 00:47:49.074540 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ca81e9-e569-4f1b-afcc-be3e47407114" containerName="setup-container" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.074549 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ca81e9-e569-4f1b-afcc-be3e47407114" containerName="setup-container" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.074819 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ca81e9-e569-4f1b-afcc-be3e47407114" containerName="rabbitmq" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.074843 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc55b214-5b43-49cd-aadb-967188b34da1" containerName="rabbitmq" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.078759 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.087653 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zvbqj" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.087871 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.087966 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.088088 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.088855 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-config-data" (OuterVolumeSpecName: "config-data") pod "dc55b214-5b43-49cd-aadb-967188b34da1" (UID: "dc55b214-5b43-49cd-aadb-967188b34da1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.088988 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.089203 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.089977 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.090132 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.131938 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.131988 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132120 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-config-data\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132318 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgmh6\" (UniqueName: \"kubernetes.io/projected/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-kube-api-access-hgmh6\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132392 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132430 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132461 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132634 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132676 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132877 4805 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dc55b214-5b43-49cd-aadb-967188b34da1-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132912 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8l9f\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-kube-api-access-h8l9f\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132925 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132938 4805 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-plugins-conf\") on node 
\"crc\" DevicePath \"\"" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132953 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132965 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132975 4805 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dc55b214-5b43-49cd-aadb-967188b34da1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.132986 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.133011 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.152156 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-server-conf" (OuterVolumeSpecName: "server-conf") pod "dc55b214-5b43-49cd-aadb-967188b34da1" (UID: "dc55b214-5b43-49cd-aadb-967188b34da1"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.168661 4805 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.205599 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dc55b214-5b43-49cd-aadb-967188b34da1" (UID: "dc55b214-5b43-49cd-aadb-967188b34da1"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.234387 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.234586 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.234659 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.234762 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.234834 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.234933 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.235014 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.235098 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-config-data\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.235179 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.235248 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.235309 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.234590 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.236158 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.237468 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.237864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-config-data\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.238749 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.238871 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgmh6\" (UniqueName: \"kubernetes.io/projected/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-kube-api-access-hgmh6\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.239061 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dc55b214-5b43-49cd-aadb-967188b34da1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.239078 4805 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dc55b214-5b43-49cd-aadb-967188b34da1-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.239088 4805 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:49 crc 
kubenswrapper[4805]: I0217 00:47:49.242113 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.250573 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.250765 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.258022 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgmh6\" (UniqueName: \"kubernetes.io/projected/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-kube-api-access-hgmh6\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.267840 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fd9b570-6f4d-49b9-96a4-54bb6744ea22-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.276401 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"1fd9b570-6f4d-49b9-96a4-54bb6744ea22\") " pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.410991 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 00:47:49 crc kubenswrapper[4805]: E0217 00:47:49.887631 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:47:49 crc kubenswrapper[4805]: E0217 00:47:49.888117 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:47:49 crc kubenswrapper[4805]: E0217 00:47:49.888236 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:47:49 crc kubenswrapper[4805]: E0217 00:47:49.889456 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.933645 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.953363 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lswkt" event={"ID":"be3c965b-0000-4121-96e3-28e6ff25b1b7","Type":"ContainerStarted","Data":"33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034"} Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.955149 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1fd9b570-6f4d-49b9-96a4-54bb6744ea22","Type":"ContainerStarted","Data":"e991a9c590eea6269ba4d9e73b1a256a5f2a0a53db545aebd2999925bf2c9a10"} Feb 17 00:47:49 crc kubenswrapper[4805]: I0217 00:47:49.955183 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.124151 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.142165 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.167911 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.170199 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.172179 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.172376 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.176132 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.176694 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.176705 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.176751 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.177009 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-djq6d" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.177070 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.362857 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: 
I0217 00:47:50.363012 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d97e2601-4fd8-4dbf-bef1-c8483ba79667-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.363042 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d97e2601-4fd8-4dbf-bef1-c8483ba79667-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.363061 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-strp5\" (UniqueName: \"kubernetes.io/projected/d97e2601-4fd8-4dbf-bef1-c8483ba79667-kube-api-access-strp5\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.363087 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d97e2601-4fd8-4dbf-bef1-c8483ba79667-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.363104 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.363121 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d97e2601-4fd8-4dbf-bef1-c8483ba79667-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.363150 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.363170 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.363200 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc 
kubenswrapper[4805]: I0217 00:47:50.363228 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d97e2601-4fd8-4dbf-bef1-c8483ba79667-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465231 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465535 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d97e2601-4fd8-4dbf-bef1-c8483ba79667-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465579 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465671 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d97e2601-4fd8-4dbf-bef1-c8483ba79667-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465698 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d97e2601-4fd8-4dbf-bef1-c8483ba79667-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-strp5\" (UniqueName: \"kubernetes.io/projected/d97e2601-4fd8-4dbf-bef1-c8483ba79667-kube-api-access-strp5\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465752 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d97e2601-4fd8-4dbf-bef1-c8483ba79667-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465771 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465788 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d97e2601-4fd8-4dbf-bef1-c8483ba79667-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465816 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465837 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.465843 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.466759 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.467263 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d97e2601-4fd8-4dbf-bef1-c8483ba79667-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.467428 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d97e2601-4fd8-4dbf-bef1-c8483ba79667-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.467525 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.469923 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d97e2601-4fd8-4dbf-bef1-c8483ba79667-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.470640 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d97e2601-4fd8-4dbf-bef1-c8483ba79667-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.472794 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.484138 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d97e2601-4fd8-4dbf-bef1-c8483ba79667-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.486302 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d97e2601-4fd8-4dbf-bef1-c8483ba79667-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.487902 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-strp5\" (UniqueName: \"kubernetes.io/projected/d97e2601-4fd8-4dbf-bef1-c8483ba79667-kube-api-access-strp5\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.520271 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d97e2601-4fd8-4dbf-bef1-c8483ba79667\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.798537 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc55b214-5b43-49cd-aadb-967188b34da1" path="/var/lib/kubelet/pods/dc55b214-5b43-49cd-aadb-967188b34da1/volumes" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.799432 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2ca81e9-e569-4f1b-afcc-be3e47407114" path="/var/lib/kubelet/pods/e2ca81e9-e569-4f1b-afcc-be3e47407114/volumes" Feb 17 00:47:50 crc kubenswrapper[4805]: I0217 00:47:50.802463 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:47:51 crc kubenswrapper[4805]: I0217 00:47:51.953672 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 00:47:51 crc kubenswrapper[4805]: I0217 00:47:51.981070 4805 generic.go:334] "Generic (PLEG): container finished" podID="be3c965b-0000-4121-96e3-28e6ff25b1b7" containerID="33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034" exitCode=0 Feb 17 00:47:51 crc kubenswrapper[4805]: I0217 00:47:51.981166 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lswkt" event={"ID":"be3c965b-0000-4121-96e3-28e6ff25b1b7","Type":"ContainerDied","Data":"33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034"} Feb 17 00:47:51 crc kubenswrapper[4805]: I0217 00:47:51.982996 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d97e2601-4fd8-4dbf-bef1-c8483ba79667","Type":"ContainerStarted","Data":"6966d86cfd7f23e7c75e94fef28a3f0ce5ef6f36e58425c70cdc7490d6fbb748"} Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.441913 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-68p4z"] Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.443955 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.446595 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.470073 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-68p4z"] Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.513254 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.513308 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.513386 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zjzl\" (UniqueName: \"kubernetes.io/projected/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-kube-api-access-7zjzl\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.513408 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.513424 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.513486 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-config\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.513507 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.602102 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-68p4z"] Feb 17 00:47:52 crc kubenswrapper[4805]: E0217 00:47:52.606670 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-7zjzl openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" podUID="b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.615125 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.615182 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.615249 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zjzl\" (UniqueName: \"kubernetes.io/projected/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-kube-api-access-7zjzl\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.615269 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.615284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-swift-storage-0\") pod 
\"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.615370 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-config\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.615397 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.616200 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.616275 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-config\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.616288 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.616458 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.616583 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.616917 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.633587 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6559847fc9-56cm5"] Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.635408 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.669467 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zjzl\" (UniqueName: \"kubernetes.io/projected/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-kube-api-access-7zjzl\") pod \"dnsmasq-dns-7d84b4d45c-68p4z\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.686209 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6559847fc9-56cm5"] Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.725049 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-openstack-edpm-ipam\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.725263 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-ovsdbserver-sb\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.725374 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-ovsdbserver-nb\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.725431 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-dns-svc\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.725467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nph6\" (UniqueName: \"kubernetes.io/projected/c3625ac6-5d39-453f-9237-65cde10f4733-kube-api-access-4nph6\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.725668 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-config\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.725749 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-dns-swift-storage-0\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.827241 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-openstack-edpm-ipam\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.827338 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-ovsdbserver-sb\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.827375 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-ovsdbserver-nb\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.827396 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-dns-svc\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.827417 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nph6\" (UniqueName: \"kubernetes.io/projected/c3625ac6-5d39-453f-9237-65cde10f4733-kube-api-access-4nph6\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.827481 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-config\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.827513 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-dns-swift-storage-0\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.828469 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-dns-swift-storage-0\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.829021 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-openstack-edpm-ipam\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.829542 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-ovsdbserver-sb\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.829777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-dns-svc\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.830284 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-config\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.830937 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3625ac6-5d39-453f-9237-65cde10f4733-ovsdbserver-nb\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.853101 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nph6\" (UniqueName: \"kubernetes.io/projected/c3625ac6-5d39-453f-9237-65cde10f4733-kube-api-access-4nph6\") pod \"dnsmasq-dns-6559847fc9-56cm5\" (UID: \"c3625ac6-5d39-453f-9237-65cde10f4733\") " pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:52 crc kubenswrapper[4805]: I0217 00:47:52.970595 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.011629 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lswkt" event={"ID":"be3c965b-0000-4121-96e3-28e6ff25b1b7","Type":"ContainerStarted","Data":"a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7"} Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.020123 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1fd9b570-6f4d-49b9-96a4-54bb6744ea22","Type":"ContainerStarted","Data":"9b8205b229da205adc0d3ab7068aa227deea8b5ad83da893c85577cb167063dc"} Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.020187 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.033034 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lswkt" podStartSLOduration=2.545033498 podStartE2EDuration="6.03301772s" podCreationTimestamp="2026-02-17 00:47:47 +0000 UTC" firstStartedPulling="2026-02-17 00:47:48.934841612 +0000 UTC m=+1494.950651030" lastFinishedPulling="2026-02-17 00:47:52.422825854 +0000 UTC m=+1498.438635252" observedRunningTime="2026-02-17 00:47:53.029712218 +0000 UTC m=+1499.045521616" watchObservedRunningTime="2026-02-17 00:47:53.03301772 +0000 UTC m=+1499.048827118" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.067500 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.077647 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.077704 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.077745 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.078505 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.078561 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" gracePeriod=600 Feb 17 00:47:53 crc kubenswrapper[4805]: E0217 00:47:53.202063 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.235272 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-openstack-edpm-ipam\") pod \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.235414 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-swift-storage-0\") pod \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.235486 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-svc\") pod \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.235547 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-sb\") pod \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.235569 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-config\") pod \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.235681 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zjzl\" (UniqueName: \"kubernetes.io/projected/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-kube-api-access-7zjzl\") pod \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.235764 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-nb\") pod \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\" (UID: \"b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1\") " Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.235957 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1" (UID: "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.236219 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1" (UID: "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.236226 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.236511 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1" (UID: "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.236567 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1" (UID: "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.236740 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1" (UID: "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.236793 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-config" (OuterVolumeSpecName: "config") pod "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1" (UID: "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.251099 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-kube-api-access-7zjzl" (OuterVolumeSpecName: "kube-api-access-7zjzl") pod "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1" (UID: "b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1"). InnerVolumeSpecName "kube-api-access-7zjzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.338275 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.338530 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.338540 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.338550 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zjzl\" (UniqueName: \"kubernetes.io/projected/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-kube-api-access-7zjzl\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.338558 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.338565 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.489076 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6559847fc9-56cm5"] Feb 17 00:47:53 crc kubenswrapper[4805]: I0217 00:47:53.679565 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="dc55b214-5b43-49cd-aadb-967188b34da1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.119:5671: i/o timeout" Feb 17 00:47:54 crc kubenswrapper[4805]: I0217 00:47:54.648536 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d97e2601-4fd8-4dbf-bef1-c8483ba79667","Type":"ContainerStarted","Data":"a57be743171df3676cfa8806e2faace2aa99c9bf7f9ac9a02de9b1bb42c8528c"} Feb 17 00:47:54 crc kubenswrapper[4805]: I0217 00:47:54.657044 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" exitCode=0 Feb 17 00:47:54 crc kubenswrapper[4805]: I0217 00:47:54.657145 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b"} Feb 17 00:47:54 crc kubenswrapper[4805]: I0217 00:47:54.657183 4805 scope.go:117] "RemoveContainer" containerID="e2ac2cae8d5d1427fe9596d0b76a1c102de0e2b3a3a542a90b4c3a31f375825b" Feb 17 00:47:54 crc kubenswrapper[4805]: I0217 00:47:54.657781 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:47:54 crc kubenswrapper[4805]: E0217 00:47:54.658097 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:47:54 crc kubenswrapper[4805]: I0217 00:47:54.661004 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559847fc9-56cm5" event={"ID":"c3625ac6-5d39-453f-9237-65cde10f4733","Type":"ContainerStarted","Data":"385df3655dd3aafd8df8e2038978a410610feefc8a6f67e64f1380be69f68883"} Feb 17 00:47:54 crc kubenswrapper[4805]: I0217 00:47:54.661028 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559847fc9-56cm5" event={"ID":"c3625ac6-5d39-453f-9237-65cde10f4733","Type":"ContainerStarted","Data":"ed2d413bc5564bfa7940a4e9ebe65854304e2546ba97f3f3ea1233a3408bd02a"} Feb 17 00:47:54 crc kubenswrapper[4805]: I0217 00:47:54.661481 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-68p4z" Feb 17 00:47:54 crc kubenswrapper[4805]: I0217 00:47:54.935383 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-68p4z"] Feb 17 00:47:54 crc kubenswrapper[4805]: I0217 00:47:54.943380 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-68p4z"] Feb 17 00:47:55 crc kubenswrapper[4805]: I0217 00:47:55.680595 4805 generic.go:334] "Generic (PLEG): container finished" podID="c3625ac6-5d39-453f-9237-65cde10f4733" containerID="385df3655dd3aafd8df8e2038978a410610feefc8a6f67e64f1380be69f68883" exitCode=0 Feb 17 00:47:55 crc kubenswrapper[4805]: I0217 00:47:55.680713 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559847fc9-56cm5" event={"ID":"c3625ac6-5d39-453f-9237-65cde10f4733","Type":"ContainerDied","Data":"385df3655dd3aafd8df8e2038978a410610feefc8a6f67e64f1380be69f68883"} Feb 17 00:47:55 crc kubenswrapper[4805]: I0217 00:47:55.681784 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:47:55 crc kubenswrapper[4805]: I0217 00:47:55.681797 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559847fc9-56cm5" event={"ID":"c3625ac6-5d39-453f-9237-65cde10f4733","Type":"ContainerStarted","Data":"124772284cb41e26a62a1d96c9c7e102cc8c92ac739004c07107dfa743f4a3cc"} Feb 17 00:47:55 crc kubenswrapper[4805]: I0217 00:47:55.728905 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6559847fc9-56cm5" podStartSLOduration=3.728878272 podStartE2EDuration="3.728878272s" podCreationTimestamp="2026-02-17 00:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:47:55.718781871 +0000 UTC m=+1501.734591309" watchObservedRunningTime="2026-02-17 00:47:55.728878272 +0000 UTC m=+1501.744687700" Feb 17 00:47:56 crc kubenswrapper[4805]: I0217 00:47:56.800406 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1" path="/var/lib/kubelet/pods/b4dc4b6a-3026-47b5-be70-2bbb24fbf5c1/volumes" Feb 17 00:47:57 crc kubenswrapper[4805]: I0217 00:47:57.979856 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:57 crc kubenswrapper[4805]: I0217 00:47:57.980808 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:58 crc kubenswrapper[4805]: I0217 00:47:58.065681 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:58 crc kubenswrapper[4805]: I0217 00:47:58.831056 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 00:47:58 crc kubenswrapper[4805]: I0217 00:47:58.831132 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:47:58 crc kubenswrapper[4805]: E0217 00:47:58.919085 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:47:58 crc kubenswrapper[4805]: E0217 00:47:58.919170 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:47:58 crc kubenswrapper[4805]: E0217 00:47:58.919399 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:47:58 crc kubenswrapper[4805]: E0217 00:47:58.921925 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:47:58 crc kubenswrapper[4805]: I0217 00:47:58.958450 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lswkt"] Feb 17 00:47:59 crc kubenswrapper[4805]: E0217 00:47:59.725027 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:48:00 crc kubenswrapper[4805]: I0217 00:48:00.736379 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lswkt" podUID="be3c965b-0000-4121-96e3-28e6ff25b1b7" containerName="registry-server" containerID="cri-o://a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7" gracePeriod=2 Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.295017 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.381940 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-utilities\") pod \"be3c965b-0000-4121-96e3-28e6ff25b1b7\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.382616 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-catalog-content\") pod \"be3c965b-0000-4121-96e3-28e6ff25b1b7\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.382991 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmzpx\" (UniqueName: \"kubernetes.io/projected/be3c965b-0000-4121-96e3-28e6ff25b1b7-kube-api-access-nmzpx\") pod \"be3c965b-0000-4121-96e3-28e6ff25b1b7\" (UID: \"be3c965b-0000-4121-96e3-28e6ff25b1b7\") " Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.383066 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-utilities" (OuterVolumeSpecName: "utilities") pod "be3c965b-0000-4121-96e3-28e6ff25b1b7" (UID: "be3c965b-0000-4121-96e3-28e6ff25b1b7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.386295 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.388998 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be3c965b-0000-4121-96e3-28e6ff25b1b7-kube-api-access-nmzpx" (OuterVolumeSpecName: "kube-api-access-nmzpx") pod "be3c965b-0000-4121-96e3-28e6ff25b1b7" (UID: "be3c965b-0000-4121-96e3-28e6ff25b1b7"). InnerVolumeSpecName "kube-api-access-nmzpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.466827 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be3c965b-0000-4121-96e3-28e6ff25b1b7" (UID: "be3c965b-0000-4121-96e3-28e6ff25b1b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.488901 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be3c965b-0000-4121-96e3-28e6ff25b1b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.488940 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmzpx\" (UniqueName: \"kubernetes.io/projected/be3c965b-0000-4121-96e3-28e6ff25b1b7-kube-api-access-nmzpx\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.751855 4805 generic.go:334] "Generic (PLEG): container finished" podID="be3c965b-0000-4121-96e3-28e6ff25b1b7" containerID="a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7" exitCode=0 Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.751921 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lswkt" event={"ID":"be3c965b-0000-4121-96e3-28e6ff25b1b7","Type":"ContainerDied","Data":"a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7"} Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.752763 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lswkt" event={"ID":"be3c965b-0000-4121-96e3-28e6ff25b1b7","Type":"ContainerDied","Data":"3789bb112c360b50075a334ed2adaebf72bcb10c28a893995ac45e38d48f0692"} Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.752797 4805 scope.go:117] "RemoveContainer" containerID="a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.751977 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lswkt" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.798199 4805 scope.go:117] "RemoveContainer" containerID="33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.805715 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lswkt"] Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.815711 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lswkt"] Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.829301 4805 scope.go:117] "RemoveContainer" containerID="ab55db8089601461c04cbf4592595b16ceb4e006fde3aca9c17134ae51327503" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.927906 4805 scope.go:117] "RemoveContainer" containerID="a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7" Feb 17 00:48:01 crc kubenswrapper[4805]: E0217 00:48:01.928657 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7\": container with ID starting with a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7 not found: ID does not exist" containerID="a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.928707 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7"} err="failed to get container status \"a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7\": rpc error: code = NotFound desc = could not find container \"a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7\": container with ID starting with a0e3743fd89bc860fdaf8ffbac228dd2963e4d83a349b70574c7053ae9f3efc7 not found: ID does not exist" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.928743 4805 scope.go:117] "RemoveContainer" containerID="33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034" Feb 17 00:48:01 crc kubenswrapper[4805]: E0217 00:48:01.929119 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034\": container with ID starting with 33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034 not found: ID does not exist" containerID="33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.929153 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034"} err="failed to get container status \"33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034\": rpc error: code = NotFound desc = could not find container \"33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034\": container with ID starting with 33ca6f4cc022810cc79a5616566d870b7fa4166bff1c3be967c18a322a0c9034 not found: ID does not exist" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.929173 4805 scope.go:117] "RemoveContainer" containerID="ab55db8089601461c04cbf4592595b16ceb4e006fde3aca9c17134ae51327503" Feb 17 00:48:01 crc kubenswrapper[4805]: E0217 00:48:01.929501 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ab55db8089601461c04cbf4592595b16ceb4e006fde3aca9c17134ae51327503\": container with ID starting with ab55db8089601461c04cbf4592595b16ceb4e006fde3aca9c17134ae51327503 not found: ID does not exist" containerID="ab55db8089601461c04cbf4592595b16ceb4e006fde3aca9c17134ae51327503" Feb 17 00:48:01 crc kubenswrapper[4805]: I0217 00:48:01.929527 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab55db8089601461c04cbf4592595b16ceb4e006fde3aca9c17134ae51327503"} err="failed to get container status \"ab55db8089601461c04cbf4592595b16ceb4e006fde3aca9c17134ae51327503\": rpc error: code = NotFound desc = could not find container \"ab55db8089601461c04cbf4592595b16ceb4e006fde3aca9c17134ae51327503\": container with ID starting with ab55db8089601461c04cbf4592595b16ceb4e006fde3aca9c17134ae51327503 not found: ID does not exist" Feb 17 00:48:02 crc kubenswrapper[4805]: E0217 00:48:02.786277 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:48:02 crc kubenswrapper[4805]: I0217 00:48:02.819474 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be3c965b-0000-4121-96e3-28e6ff25b1b7" path="/var/lib/kubelet/pods/be3c965b-0000-4121-96e3-28e6ff25b1b7/volumes" Feb 17 00:48:02 crc kubenswrapper[4805]: I0217 00:48:02.973571 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6559847fc9-56cm5" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.082614 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-glxm7"] Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.082942 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" podUID="c232df1e-ad0d-4b23-9e2c-0c3494aee55b" containerName="dnsmasq-dns" containerID="cri-o://2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d" gracePeriod=10 Feb 17 00:48:03 crc kubenswrapper[4805]: E0217 00:48:03.261129 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4dc4b6a_3026_47b5_be70_2bbb24fbf5c1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc232df1e_ad0d_4b23_9e2c_0c3494aee55b.slice/crio-conmon-2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d.scope\": RecentStats: unable to find data in memory cache]" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.637638 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.746135 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-sb\") pod \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.746261 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-nb\") pod \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.746279 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-config\") pod \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.746333 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc8js\" (UniqueName: \"kubernetes.io/projected/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-kube-api-access-sc8js\") pod \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.746422 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-svc\") pod \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.746466 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-swift-storage-0\") pod \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\" (UID: \"c232df1e-ad0d-4b23-9e2c-0c3494aee55b\") " Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.754467 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-kube-api-access-sc8js" (OuterVolumeSpecName: "kube-api-access-sc8js") pod "c232df1e-ad0d-4b23-9e2c-0c3494aee55b" (UID: "c232df1e-ad0d-4b23-9e2c-0c3494aee55b"). InnerVolumeSpecName "kube-api-access-sc8js". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.784920 4805 generic.go:334] "Generic (PLEG): container finished" podID="c232df1e-ad0d-4b23-9e2c-0c3494aee55b" containerID="2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d" exitCode=0 Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.785136 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" event={"ID":"c232df1e-ad0d-4b23-9e2c-0c3494aee55b","Type":"ContainerDied","Data":"2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d"} Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.785215 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" event={"ID":"c232df1e-ad0d-4b23-9e2c-0c3494aee55b","Type":"ContainerDied","Data":"dc853fe8bbc6ee4ff002909b0628f19119dbb1c4ba5db133e5e25e5e9c5d4d89"} Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.785296 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.785300 4805 scope.go:117] "RemoveContainer" containerID="2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.797569 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c232df1e-ad0d-4b23-9e2c-0c3494aee55b" (UID: "c232df1e-ad0d-4b23-9e2c-0c3494aee55b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.797578 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c232df1e-ad0d-4b23-9e2c-0c3494aee55b" (UID: "c232df1e-ad0d-4b23-9e2c-0c3494aee55b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.808260 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c232df1e-ad0d-4b23-9e2c-0c3494aee55b" (UID: "c232df1e-ad0d-4b23-9e2c-0c3494aee55b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.823788 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-config" (OuterVolumeSpecName: "config") pod "c232df1e-ad0d-4b23-9e2c-0c3494aee55b" (UID: "c232df1e-ad0d-4b23-9e2c-0c3494aee55b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.831986 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c232df1e-ad0d-4b23-9e2c-0c3494aee55b" (UID: "c232df1e-ad0d-4b23-9e2c-0c3494aee55b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.849080 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.849112 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.849121 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.849131 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.849140 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.849148 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc8js\" (UniqueName: \"kubernetes.io/projected/c232df1e-ad0d-4b23-9e2c-0c3494aee55b-kube-api-access-sc8js\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.921723 4805 scope.go:117] "RemoveContainer" containerID="ef0c5c0e727b33d9ee3186de834fe45f11461c4f204f06f8d722a471344f9b18" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.943068 4805 scope.go:117] "RemoveContainer" containerID="2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d" Feb 17 00:48:03 crc kubenswrapper[4805]: E0217 00:48:03.943464 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d\": container with ID starting with 2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d not found: ID does not exist" containerID="2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.943497 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d"} err="failed to get container status \"2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d\": rpc error: code = NotFound desc = could not find container \"2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d\": container with ID starting with 2bb9a1d1b35a1ac1756744816eb695245a58bbfcbfe2cf6a5f9591a42634268d not found: ID does not exist" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.943518 4805 scope.go:117] "RemoveContainer" containerID="ef0c5c0e727b33d9ee3186de834fe45f11461c4f204f06f8d722a471344f9b18" Feb 17 00:48:03 crc kubenswrapper[4805]: E0217 00:48:03.943867 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef0c5c0e727b33d9ee3186de834fe45f11461c4f204f06f8d722a471344f9b18\": container with ID starting with 
ef0c5c0e727b33d9ee3186de834fe45f11461c4f204f06f8d722a471344f9b18 not found: ID does not exist" containerID="ef0c5c0e727b33d9ee3186de834fe45f11461c4f204f06f8d722a471344f9b18" Feb 17 00:48:03 crc kubenswrapper[4805]: I0217 00:48:03.943887 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef0c5c0e727b33d9ee3186de834fe45f11461c4f204f06f8d722a471344f9b18"} err="failed to get container status \"ef0c5c0e727b33d9ee3186de834fe45f11461c4f204f06f8d722a471344f9b18\": rpc error: code = NotFound desc = could not find container \"ef0c5c0e727b33d9ee3186de834fe45f11461c4f204f06f8d722a471344f9b18\": container with ID starting with ef0c5c0e727b33d9ee3186de834fe45f11461c4f204f06f8d722a471344f9b18 not found: ID does not exist" Feb 17 00:48:04 crc kubenswrapper[4805]: I0217 00:48:04.120181 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-glxm7"] Feb 17 00:48:04 crc kubenswrapper[4805]: I0217 00:48:04.132177 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-glxm7"] Feb 17 00:48:04 crc kubenswrapper[4805]: I0217 00:48:04.800958 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c232df1e-ad0d-4b23-9e2c-0c3494aee55b" path="/var/lib/kubelet/pods/c232df1e-ad0d-4b23-9e2c-0c3494aee55b/volumes" Feb 17 00:48:08 crc kubenswrapper[4805]: I0217 00:48:08.399953 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7bbf7cf9-glxm7" podUID="c232df1e-ad0d-4b23-9e2c-0c3494aee55b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.240:5353: i/o timeout" Feb 17 00:48:09 crc kubenswrapper[4805]: I0217 00:48:09.785375 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:48:09 crc kubenswrapper[4805]: E0217 00:48:09.785791 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:48:10 crc kubenswrapper[4805]: E0217 00:48:10.787634 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.623641 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx"] Feb 17 00:48:11 crc kubenswrapper[4805]: E0217 00:48:11.624430 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c232df1e-ad0d-4b23-9e2c-0c3494aee55b" containerName="dnsmasq-dns" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.624460 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c232df1e-ad0d-4b23-9e2c-0c3494aee55b" containerName="dnsmasq-dns" Feb 17 00:48:11 crc kubenswrapper[4805]: E0217 00:48:11.624481 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be3c965b-0000-4121-96e3-28e6ff25b1b7" containerName="extract-utilities" Feb 17 00:48:11 crc 
kubenswrapper[4805]: I0217 00:48:11.624495 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="be3c965b-0000-4121-96e3-28e6ff25b1b7" containerName="extract-utilities" Feb 17 00:48:11 crc kubenswrapper[4805]: E0217 00:48:11.624517 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be3c965b-0000-4121-96e3-28e6ff25b1b7" containerName="registry-server" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.624530 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="be3c965b-0000-4121-96e3-28e6ff25b1b7" containerName="registry-server" Feb 17 00:48:11 crc kubenswrapper[4805]: E0217 00:48:11.624592 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be3c965b-0000-4121-96e3-28e6ff25b1b7" containerName="extract-content" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.624606 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="be3c965b-0000-4121-96e3-28e6ff25b1b7" containerName="extract-content" Feb 17 00:48:11 crc kubenswrapper[4805]: E0217 00:48:11.624660 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c232df1e-ad0d-4b23-9e2c-0c3494aee55b" containerName="init" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.624674 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c232df1e-ad0d-4b23-9e2c-0c3494aee55b" containerName="init" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.625048 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c232df1e-ad0d-4b23-9e2c-0c3494aee55b" containerName="dnsmasq-dns" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.625085 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="be3c965b-0000-4121-96e3-28e6ff25b1b7" containerName="registry-server" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.626422 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.628806 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.629104 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.629299 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.629621 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.659166 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx"] Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.736693 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.736779 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.736851 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw82q\" (UniqueName: \"kubernetes.io/projected/0fe4c30a-bcb1-429d-8796-a1bacaec3988-kube-api-access-sw82q\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.736977 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.839194 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.839260 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.839313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw82q\" (UniqueName: \"kubernetes.io/projected/0fe4c30a-bcb1-429d-8796-a1bacaec3988-kube-api-access-sw82q\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.839363 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.845835 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.850971 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.851224 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.871152 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw82q\" (UniqueName: \"kubernetes.io/projected/0fe4c30a-bcb1-429d-8796-a1bacaec3988-kube-api-access-sw82q\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:11 crc kubenswrapper[4805]: I0217 00:48:11.961139 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:12 crc kubenswrapper[4805]: I0217 00:48:12.544447 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx"] Feb 17 00:48:12 crc kubenswrapper[4805]: I0217 00:48:12.556628 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 00:48:12 crc kubenswrapper[4805]: I0217 00:48:12.898278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" event={"ID":"0fe4c30a-bcb1-429d-8796-a1bacaec3988","Type":"ContainerStarted","Data":"4197e10887ff122c3be0e3c7de4698b8d6de3179ad96adf2be13d332329b722b"} Feb 17 00:48:13 crc kubenswrapper[4805]: E0217 00:48:13.539474 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4dc4b6a_3026_47b5_be70_2bbb24fbf5c1.slice\": RecentStats: unable to find data in memory cache]" Feb 17 00:48:15 crc kubenswrapper[4805]: I0217 00:48:15.022931 4805 scope.go:117] "RemoveContainer" containerID="937219e051ca008592afb84a19bc551c316843281575cc9779fe5a8e5ffe5bd5" Feb 17 00:48:16 crc kubenswrapper[4805]: E0217 00:48:16.933110 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:48:16 crc kubenswrapper[4805]: E0217 00:48:16.933670 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:48:16 crc kubenswrapper[4805]: E0217 00:48:16.933777 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:48:16 crc kubenswrapper[4805]: E0217 00:48:16.935117 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.488447 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7dpbz"] Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.493762 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.498682 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dpbz"] Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.540714 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-utilities\") pod \"certified-operators-7dpbz\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.541106 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-catalog-content\") pod \"certified-operators-7dpbz\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.541248 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn648\" (UniqueName: \"kubernetes.io/projected/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-kube-api-access-fn648\") pod \"certified-operators-7dpbz\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.643068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-utilities\") pod \"certified-operators-7dpbz\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.643160 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-catalog-content\") pod \"certified-operators-7dpbz\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.643191 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn648\" (UniqueName: \"kubernetes.io/projected/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-kube-api-access-fn648\") pod \"certified-operators-7dpbz\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.645239 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-utilities\") pod \"certified-operators-7dpbz\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.645227 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-catalog-content\") pod \"certified-operators-7dpbz\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.674422 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fn648\" (UniqueName: \"kubernetes.io/projected/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-kube-api-access-fn648\") pod \"certified-operators-7dpbz\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:21 crc kubenswrapper[4805]: I0217 00:48:21.814951 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:22 crc kubenswrapper[4805]: W0217 00:48:22.357164 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8ca9a49_25f7_45d3_94d4_dfe9b9f49b8a.slice/crio-64c0ccd9ce87697deb53c62af92726f95ed6737bc8cc79a21c1804ef4c734220 WatchSource:0}: Error finding container 64c0ccd9ce87697deb53c62af92726f95ed6737bc8cc79a21c1804ef4c734220: Status 404 returned error can't find the container with id 64c0ccd9ce87697deb53c62af92726f95ed6737bc8cc79a21c1804ef4c734220 Feb 17 00:48:22 crc kubenswrapper[4805]: I0217 00:48:22.363232 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dpbz"] Feb 17 00:48:22 crc kubenswrapper[4805]: I0217 00:48:22.785815 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:48:22 crc kubenswrapper[4805]: E0217 00:48:22.786154 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:48:23 crc kubenswrapper[4805]: I0217 00:48:23.040676 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" event={"ID":"0fe4c30a-bcb1-429d-8796-a1bacaec3988","Type":"ContainerStarted","Data":"113abf0282e9261b88dd68a68fea4ee38a266f7a63d47685213a9b3271f6ad14"} Feb 17 00:48:23 crc kubenswrapper[4805]: I0217 00:48:23.043146 4805 generic.go:334] "Generic (PLEG): container finished" podID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" containerID="fa462845b29b778ef55f032f80f4ddc582c254e38d9485db06018f1adb0ea537" exitCode=0 Feb 17 00:48:23 crc kubenswrapper[4805]: I0217 00:48:23.043218 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dpbz" event={"ID":"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a","Type":"ContainerDied","Data":"fa462845b29b778ef55f032f80f4ddc582c254e38d9485db06018f1adb0ea537"} Feb 17 00:48:23 crc kubenswrapper[4805]: I0217 00:48:23.043257 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dpbz" event={"ID":"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a","Type":"ContainerStarted","Data":"64c0ccd9ce87697deb53c62af92726f95ed6737bc8cc79a21c1804ef4c734220"} Feb 17 00:48:23 crc kubenswrapper[4805]: I0217 00:48:23.083867 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" podStartSLOduration=2.9596165770000002 podStartE2EDuration="12.08384223s" podCreationTimestamp="2026-02-17 00:48:11 +0000 UTC" firstStartedPulling="2026-02-17 00:48:12.556416377 +0000 UTC m=+1518.572225765" 
lastFinishedPulling="2026-02-17 00:48:21.68064202 +0000 UTC m=+1527.696451418" observedRunningTime="2026-02-17 00:48:23.063220326 +0000 UTC m=+1529.079029754" watchObservedRunningTime="2026-02-17 00:48:23.08384223 +0000 UTC m=+1529.099651668" Feb 17 00:48:23 crc kubenswrapper[4805]: E0217 00:48:23.808368 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4dc4b6a_3026_47b5_be70_2bbb24fbf5c1.slice\": RecentStats: unable to find data in memory cache]" Feb 17 00:48:24 crc kubenswrapper[4805]: I0217 00:48:24.056875 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dpbz" event={"ID":"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a","Type":"ContainerStarted","Data":"77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a"} Feb 17 00:48:25 crc kubenswrapper[4805]: E0217 00:48:25.015293 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:48:25 crc kubenswrapper[4805]: E0217 00:48:25.015369 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:48:25 crc kubenswrapper[4805]: E0217 00:48:25.015512 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:48:25 crc kubenswrapper[4805]: E0217 00:48:25.017020 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:48:25 crc kubenswrapper[4805]: I0217 00:48:25.069092 4805 generic.go:334] "Generic (PLEG): container finished" podID="1fd9b570-6f4d-49b9-96a4-54bb6744ea22" containerID="9b8205b229da205adc0d3ab7068aa227deea8b5ad83da893c85577cb167063dc" exitCode=0 Feb 17 00:48:25 crc kubenswrapper[4805]: I0217 00:48:25.070249 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1fd9b570-6f4d-49b9-96a4-54bb6744ea22","Type":"ContainerDied","Data":"9b8205b229da205adc0d3ab7068aa227deea8b5ad83da893c85577cb167063dc"} Feb 17 00:48:26 crc kubenswrapper[4805]: I0217 00:48:26.082715 4805 generic.go:334] "Generic (PLEG): container finished" podID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" containerID="77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a" exitCode=0 Feb 17 00:48:26 crc kubenswrapper[4805]: I0217 00:48:26.082804 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dpbz" event={"ID":"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a","Type":"ContainerDied","Data":"77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a"} Feb 17 00:48:26 crc kubenswrapper[4805]: I0217 00:48:26.089625 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1fd9b570-6f4d-49b9-96a4-54bb6744ea22","Type":"ContainerStarted","Data":"1c35a39674a0f4d42afb23b1ff2264533533531a8a737e1eecf47c3a2b47a4c5"} Feb 17 00:48:26 crc kubenswrapper[4805]: I0217 00:48:26.090006 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 00:48:26 crc kubenswrapper[4805]: I0217 00:48:26.150204 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.150187605 podStartE2EDuration="38.150187605s" podCreationTimestamp="2026-02-17 00:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:48:26.135801674 +0000 UTC m=+1532.151611072" watchObservedRunningTime="2026-02-17 00:48:26.150187605 +0000 UTC m=+1532.165997003" Feb 17 00:48:27 crc kubenswrapper[4805]: I0217 00:48:27.103124 4805 generic.go:334] "Generic (PLEG): container finished" podID="d97e2601-4fd8-4dbf-bef1-c8483ba79667" containerID="a57be743171df3676cfa8806e2faace2aa99c9bf7f9ac9a02de9b1bb42c8528c" exitCode=0 Feb 17 00:48:27 crc kubenswrapper[4805]: I0217 00:48:27.103234 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d97e2601-4fd8-4dbf-bef1-c8483ba79667","Type":"ContainerDied","Data":"a57be743171df3676cfa8806e2faace2aa99c9bf7f9ac9a02de9b1bb42c8528c"} Feb 17 00:48:27 crc kubenswrapper[4805]: I0217 00:48:27.106054 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dpbz" event={"ID":"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a","Type":"ContainerStarted","Data":"96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04"} Feb 17 00:48:27 crc kubenswrapper[4805]: E0217 00:48:27.785711 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" 
podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:48:27 crc kubenswrapper[4805]: I0217 00:48:27.812065 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7dpbz" podStartSLOduration=3.371246415 podStartE2EDuration="6.812041744s" podCreationTimestamp="2026-02-17 00:48:21 +0000 UTC" firstStartedPulling="2026-02-17 00:48:23.044884135 +0000 UTC m=+1529.060693533" lastFinishedPulling="2026-02-17 00:48:26.485679434 +0000 UTC m=+1532.501488862" observedRunningTime="2026-02-17 00:48:27.170068514 +0000 UTC m=+1533.185877912" watchObservedRunningTime="2026-02-17 00:48:27.812041744 +0000 UTC m=+1533.827851152" Feb 17 00:48:28 crc kubenswrapper[4805]: I0217 00:48:28.121395 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d97e2601-4fd8-4dbf-bef1-c8483ba79667","Type":"ContainerStarted","Data":"e8df3a29eb8b512ee58e334dd0bb8caa292a19c37a44677546cfbd678df2f0e4"} Feb 17 00:48:28 crc kubenswrapper[4805]: I0217 00:48:28.122166 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:48:28 crc kubenswrapper[4805]: I0217 00:48:28.161577 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.161556443 podStartE2EDuration="38.161556443s" podCreationTimestamp="2026-02-17 00:47:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:48:28.146829043 +0000 UTC m=+1534.162638441" watchObservedRunningTime="2026-02-17 00:48:28.161556443 +0000 UTC m=+1534.177365851" Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.208757 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bzgbz"] Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.211523 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.222928 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bzgbz"] Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.350040 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-utilities\") pod \"redhat-operators-bzgbz\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.350578 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-catalog-content\") pod \"redhat-operators-bzgbz\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.350658 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4rk2\" (UniqueName: \"kubernetes.io/projected/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-kube-api-access-r4rk2\") pod \"redhat-operators-bzgbz\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.453127 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-catalog-content\") pod \"redhat-operators-bzgbz\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.453203 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4rk2\" (UniqueName: \"kubernetes.io/projected/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-kube-api-access-r4rk2\") pod \"redhat-operators-bzgbz\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.453271 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-utilities\") pod \"redhat-operators-bzgbz\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.453816 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-catalog-content\") pod \"redhat-operators-bzgbz\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.453849 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-utilities\") pod \"redhat-operators-bzgbz\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.478260 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r4rk2\" (UniqueName: \"kubernetes.io/projected/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-kube-api-access-r4rk2\") pod \"redhat-operators-bzgbz\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:30 crc kubenswrapper[4805]: I0217 00:48:30.540607 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:31 crc kubenswrapper[4805]: W0217 00:48:31.078487 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda36c2d5c_712c_4680_a1a0_f1140ee6fbc7.slice/crio-6d4117732f223f3b3264eba19dc67a86f1b8b8ea78dd3473a070ac7158f615a9 WatchSource:0}: Error finding container 6d4117732f223f3b3264eba19dc67a86f1b8b8ea78dd3473a070ac7158f615a9: Status 404 returned error can't find the container with id 6d4117732f223f3b3264eba19dc67a86f1b8b8ea78dd3473a070ac7158f615a9 Feb 17 00:48:31 crc kubenswrapper[4805]: I0217 00:48:31.087407 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bzgbz"] Feb 17 00:48:31 crc kubenswrapper[4805]: I0217 00:48:31.162391 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzgbz" event={"ID":"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7","Type":"ContainerStarted","Data":"6d4117732f223f3b3264eba19dc67a86f1b8b8ea78dd3473a070ac7158f615a9"} Feb 17 00:48:31 crc kubenswrapper[4805]: I0217 00:48:31.815608 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:31 crc kubenswrapper[4805]: I0217 00:48:31.816406 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:31 crc kubenswrapper[4805]: I0217 00:48:31.865389 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:32 crc kubenswrapper[4805]: I0217 00:48:32.181355 4805 generic.go:334] "Generic (PLEG): container finished" podID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerID="36dd0d1088e94533c87fa1a6130ade4852def3614ab1bd592807ccb233bb851f" exitCode=0 Feb 17 00:48:32 crc kubenswrapper[4805]: I0217 00:48:32.181442 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzgbz" event={"ID":"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7","Type":"ContainerDied","Data":"36dd0d1088e94533c87fa1a6130ade4852def3614ab1bd592807ccb233bb851f"} Feb 17 00:48:32 crc kubenswrapper[4805]: I0217 00:48:32.254032 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:33 crc kubenswrapper[4805]: I0217 00:48:33.197488 4805 generic.go:334] "Generic (PLEG): container finished" podID="0fe4c30a-bcb1-429d-8796-a1bacaec3988" containerID="113abf0282e9261b88dd68a68fea4ee38a266f7a63d47685213a9b3271f6ad14" exitCode=0 Feb 17 00:48:33 crc kubenswrapper[4805]: I0217 00:48:33.197622 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" event={"ID":"0fe4c30a-bcb1-429d-8796-a1bacaec3988","Type":"ContainerDied","Data":"113abf0282e9261b88dd68a68fea4ee38a266f7a63d47685213a9b3271f6ad14"} Feb 17 00:48:33 crc kubenswrapper[4805]: I0217 00:48:33.201253 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bzgbz" event={"ID":"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7","Type":"ContainerStarted","Data":"bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774"} Feb 17 00:48:34 crc kubenswrapper[4805]: E0217 00:48:34.091852 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4dc4b6a_3026_47b5_be70_2bbb24fbf5c1.slice\": RecentStats: unable to find data in memory cache]" Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.161923 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7dpbz"] Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.209962 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7dpbz" podUID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" containerName="registry-server" containerID="cri-o://96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04" gracePeriod=2 Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.863229 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.870636 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.950178 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn648\" (UniqueName: \"kubernetes.io/projected/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-kube-api-access-fn648\") pod \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.950226 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-repo-setup-combined-ca-bundle\") pod \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.950291 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-catalog-content\") pod \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.950417 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-ssh-key-openstack-edpm-ipam\") pod \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.950435 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw82q\" (UniqueName: \"kubernetes.io/projected/0fe4c30a-bcb1-429d-8796-a1bacaec3988-kube-api-access-sw82q\") pod \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.950493 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-utilities\") pod \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\" (UID: \"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a\") " Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.950543 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-inventory\") pod \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\" (UID: \"0fe4c30a-bcb1-429d-8796-a1bacaec3988\") " Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.952852 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-utilities" (OuterVolumeSpecName: "utilities") pod "e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" (UID: "e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.958046 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe4c30a-bcb1-429d-8796-a1bacaec3988-kube-api-access-sw82q" (OuterVolumeSpecName: "kube-api-access-sw82q") pod "0fe4c30a-bcb1-429d-8796-a1bacaec3988" (UID: "0fe4c30a-bcb1-429d-8796-a1bacaec3988"). InnerVolumeSpecName "kube-api-access-sw82q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.965877 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0fe4c30a-bcb1-429d-8796-a1bacaec3988" (UID: "0fe4c30a-bcb1-429d-8796-a1bacaec3988"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.977254 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-kube-api-access-fn648" (OuterVolumeSpecName: "kube-api-access-fn648") pod "e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" (UID: "e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a"). InnerVolumeSpecName "kube-api-access-fn648". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.984929 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-inventory" (OuterVolumeSpecName: "inventory") pod "0fe4c30a-bcb1-429d-8796-a1bacaec3988" (UID: "0fe4c30a-bcb1-429d-8796-a1bacaec3988"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:48:34 crc kubenswrapper[4805]: I0217 00:48:34.988270 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0fe4c30a-bcb1-429d-8796-a1bacaec3988" (UID: "0fe4c30a-bcb1-429d-8796-a1bacaec3988"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.008501 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" (UID: "e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.052443 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.052479 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn648\" (UniqueName: \"kubernetes.io/projected/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-kube-api-access-fn648\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.052498 4805 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.052511 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.052523 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw82q\" (UniqueName: \"kubernetes.io/projected/0fe4c30a-bcb1-429d-8796-a1bacaec3988-kube-api-access-sw82q\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.052536 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fe4c30a-bcb1-429d-8796-a1bacaec3988-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.052548 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.219861 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" event={"ID":"0fe4c30a-bcb1-429d-8796-a1bacaec3988","Type":"ContainerDied","Data":"4197e10887ff122c3be0e3c7de4698b8d6de3179ad96adf2be13d332329b722b"} Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.219911 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4197e10887ff122c3be0e3c7de4698b8d6de3179ad96adf2be13d332329b722b" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.219996 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.228454 4805 generic.go:334] "Generic (PLEG): container finished" podID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" containerID="96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04" exitCode=0 Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.228495 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dpbz" event={"ID":"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a","Type":"ContainerDied","Data":"96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04"} Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.228553 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dpbz" event={"ID":"e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a","Type":"ContainerDied","Data":"64c0ccd9ce87697deb53c62af92726f95ed6737bc8cc79a21c1804ef4c734220"} Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.228570 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dpbz" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.228593 4805 scope.go:117] "RemoveContainer" containerID="96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.256047 4805 scope.go:117] "RemoveContainer" containerID="77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.297817 4805 scope.go:117] "RemoveContainer" containerID="fa462845b29b778ef55f032f80f4ddc582c254e38d9485db06018f1adb0ea537" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.306632 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6"] Feb 17 00:48:35 crc kubenswrapper[4805]: E0217 00:48:35.307311 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" containerName="extract-content" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.307536 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" containerName="extract-content" Feb 17 00:48:35 crc kubenswrapper[4805]: E0217 00:48:35.307613 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" containerName="registry-server" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.307630 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" containerName="registry-server" Feb 17 00:48:35 crc kubenswrapper[4805]: E0217 00:48:35.307661 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe4c30a-bcb1-429d-8796-a1bacaec3988" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.307686 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe4c30a-bcb1-429d-8796-a1bacaec3988" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 00:48:35 crc kubenswrapper[4805]: E0217 00:48:35.307716 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" containerName="extract-utilities" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.307728 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" 
containerName="extract-utilities" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.308013 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe4c30a-bcb1-429d-8796-a1bacaec3988" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.308043 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" containerName="registry-server" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.309095 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.312265 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.312830 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.312993 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.314480 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.327284 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7dpbz"] Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.334969 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7dpbz"] Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.343878 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6"] Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.350641 4805 scope.go:117] "RemoveContainer" containerID="96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04" Feb 17 00:48:35 crc kubenswrapper[4805]: E0217 00:48:35.353586 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04\": container with ID starting with 96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04 not found: ID does not exist" containerID="96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.353652 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04"} err="failed to get container status \"96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04\": rpc error: code = NotFound desc = could not find container \"96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04\": container with ID starting with 96dfba63d2ea903ec7c4e7a60fdfcbda28f91d8ad42a19687d7ee3afe6355d04 not found: ID does not exist" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.353681 4805 scope.go:117] "RemoveContainer" containerID="77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a" Feb 17 00:48:35 crc kubenswrapper[4805]: E0217 00:48:35.355986 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a\": container with ID starting with 77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a not found: ID does not exist" containerID="77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.356060 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a"} err="failed to get container status \"77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a\": rpc error: code = NotFound desc = could not find container \"77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a\": container with ID starting with 77d757e10e41f7725c7f9329a9ebc7b68a6f31992488884b29b4ef5f17a7a27a not found: ID does not exist" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.356093 4805 scope.go:117] "RemoveContainer" containerID="fa462845b29b778ef55f032f80f4ddc582c254e38d9485db06018f1adb0ea537" Feb 17 00:48:35 crc kubenswrapper[4805]: E0217 00:48:35.358104 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa462845b29b778ef55f032f80f4ddc582c254e38d9485db06018f1adb0ea537\": container with ID starting with fa462845b29b778ef55f032f80f4ddc582c254e38d9485db06018f1adb0ea537 not found: ID does not exist" containerID="fa462845b29b778ef55f032f80f4ddc582c254e38d9485db06018f1adb0ea537" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.358150 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa462845b29b778ef55f032f80f4ddc582c254e38d9485db06018f1adb0ea537"} err="failed to get container status \"fa462845b29b778ef55f032f80f4ddc582c254e38d9485db06018f1adb0ea537\": rpc error: code = NotFound desc = could not find container \"fa462845b29b778ef55f032f80f4ddc582c254e38d9485db06018f1adb0ea537\": container with ID starting with fa462845b29b778ef55f032f80f4ddc582c254e38d9485db06018f1adb0ea537 not found: ID does not exist" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.462520 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.462770 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.462983 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2zfg\" (UniqueName: \"kubernetes.io/projected/0093521f-7e1e-421e-a1ce-bf4e5612ba77-kube-api-access-w2zfg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 
00:48:35.463078 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.564858 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.565030 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2zfg\" (UniqueName: \"kubernetes.io/projected/0093521f-7e1e-421e-a1ce-bf4e5612ba77-kube-api-access-w2zfg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.565087 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.565149 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.570861 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.571403 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.571529 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc 
kubenswrapper[4805]: I0217 00:48:35.597850 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2zfg\" (UniqueName: \"kubernetes.io/projected/0093521f-7e1e-421e-a1ce-bf4e5612ba77-kube-api-access-w2zfg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.743712 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:48:35 crc kubenswrapper[4805]: I0217 00:48:35.785559 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:48:35 crc kubenswrapper[4805]: E0217 00:48:35.786030 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:48:36 crc kubenswrapper[4805]: I0217 00:48:36.414906 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6"] Feb 17 00:48:36 crc kubenswrapper[4805]: I0217 00:48:36.805469 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a" path="/var/lib/kubelet/pods/e8ca9a49-25f7-45d3-94d4-dfe9b9f49b8a/volumes" Feb 17 00:48:37 crc kubenswrapper[4805]: I0217 00:48:37.267351 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" event={"ID":"0093521f-7e1e-421e-a1ce-bf4e5612ba77","Type":"ContainerStarted","Data":"43384c7fa6acabe56ae6ae89c1ecaad02d04be7e3cbd9caf00d19bb0fcb905d4"} Feb 17 00:48:37 crc kubenswrapper[4805]: I0217 00:48:37.267391 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" event={"ID":"0093521f-7e1e-421e-a1ce-bf4e5612ba77","Type":"ContainerStarted","Data":"a17248697d91073be63892d64a790fbfa52e398f681e6fef5ec1297024cde5fd"} Feb 17 00:48:37 crc kubenswrapper[4805]: I0217 00:48:37.310199 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" podStartSLOduration=1.8897948549999999 podStartE2EDuration="2.310169376s" podCreationTimestamp="2026-02-17 00:48:35 +0000 UTC" firstStartedPulling="2026-02-17 00:48:36.415011298 +0000 UTC m=+1542.430820706" lastFinishedPulling="2026-02-17 00:48:36.835385829 +0000 UTC m=+1542.851195227" observedRunningTime="2026-02-17 00:48:37.289069218 +0000 UTC m=+1543.304878616" watchObservedRunningTime="2026-02-17 00:48:37.310169376 +0000 UTC m=+1543.325978814" Feb 17 00:48:38 crc kubenswrapper[4805]: I0217 00:48:38.279683 4805 generic.go:334] "Generic (PLEG): container finished" podID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerID="bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774" exitCode=0 Feb 17 00:48:38 crc kubenswrapper[4805]: I0217 00:48:38.279870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzgbz" 
event={"ID":"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7","Type":"ContainerDied","Data":"bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774"} Feb 17 00:48:38 crc kubenswrapper[4805]: E0217 00:48:38.785817 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:48:39 crc kubenswrapper[4805]: I0217 00:48:39.293190 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzgbz" event={"ID":"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7","Type":"ContainerStarted","Data":"b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5"} Feb 17 00:48:39 crc kubenswrapper[4805]: I0217 00:48:39.331807 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bzgbz" podStartSLOduration=2.774345876 podStartE2EDuration="9.331783439s" podCreationTimestamp="2026-02-17 00:48:30 +0000 UTC" firstStartedPulling="2026-02-17 00:48:32.183965042 +0000 UTC m=+1538.199774460" lastFinishedPulling="2026-02-17 00:48:38.741402625 +0000 UTC m=+1544.757212023" observedRunningTime="2026-02-17 00:48:39.319787805 +0000 UTC m=+1545.335597213" watchObservedRunningTime="2026-02-17 00:48:39.331783439 +0000 UTC m=+1545.347592837" Feb 17 00:48:39 crc kubenswrapper[4805]: I0217 00:48:39.414565 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 00:48:40 crc kubenswrapper[4805]: I0217 00:48:40.541027 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:40 crc kubenswrapper[4805]: I0217 00:48:40.541422 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:40 crc kubenswrapper[4805]: I0217 00:48:40.807655 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 00:48:41 crc kubenswrapper[4805]: I0217 00:48:41.599297 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bzgbz" podUID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerName="registry-server" probeResult="failure" output=< Feb 17 00:48:41 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 00:48:41 crc kubenswrapper[4805]: > Feb 17 00:48:41 crc kubenswrapper[4805]: E0217 00:48:41.786251 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:48:44 crc kubenswrapper[4805]: E0217 00:48:44.362216 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4dc4b6a_3026_47b5_be70_2bbb24fbf5c1.slice\": RecentStats: unable to find data in memory cache]" Feb 17 00:48:48 crc kubenswrapper[4805]: I0217 00:48:48.785620 4805 scope.go:117] "RemoveContainer" 
containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:48:48 crc kubenswrapper[4805]: E0217 00:48:48.786509 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:48:50 crc kubenswrapper[4805]: I0217 00:48:50.609739 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:50 crc kubenswrapper[4805]: I0217 00:48:50.678952 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:50 crc kubenswrapper[4805]: I0217 00:48:50.854282 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bzgbz"] Feb 17 00:48:52 crc kubenswrapper[4805]: I0217 00:48:52.446668 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bzgbz" podUID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerName="registry-server" containerID="cri-o://b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5" gracePeriod=2 Feb 17 00:48:52 crc kubenswrapper[4805]: E0217 00:48:52.788703 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:48:52 crc kubenswrapper[4805]: I0217 00:48:52.992510 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.165413 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4rk2\" (UniqueName: \"kubernetes.io/projected/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-kube-api-access-r4rk2\") pod \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.165843 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-utilities\") pod \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.166566 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-utilities" (OuterVolumeSpecName: "utilities") pod "a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" (UID: "a36c2d5c-712c-4680-a1a0-f1140ee6fbc7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.166645 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-catalog-content\") pod \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\" (UID: \"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7\") " Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.169573 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.173174 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-kube-api-access-r4rk2" (OuterVolumeSpecName: "kube-api-access-r4rk2") pod "a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" (UID: "a36c2d5c-712c-4680-a1a0-f1140ee6fbc7"). InnerVolumeSpecName "kube-api-access-r4rk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.271948 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4rk2\" (UniqueName: \"kubernetes.io/projected/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-kube-api-access-r4rk2\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.327457 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" (UID: "a36c2d5c-712c-4680-a1a0-f1140ee6fbc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.374110 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.458194 4805 generic.go:334] "Generic (PLEG): container finished" podID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerID="b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5" exitCode=0 Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.458254 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzgbz" event={"ID":"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7","Type":"ContainerDied","Data":"b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5"} Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.458305 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzgbz" event={"ID":"a36c2d5c-712c-4680-a1a0-f1140ee6fbc7","Type":"ContainerDied","Data":"6d4117732f223f3b3264eba19dc67a86f1b8b8ea78dd3473a070ac7158f615a9"} Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.458350 4805 scope.go:117] "RemoveContainer" containerID="b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.458276 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bzgbz" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.496975 4805 scope.go:117] "RemoveContainer" containerID="bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.497071 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bzgbz"] Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.505840 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bzgbz"] Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.527814 4805 scope.go:117] "RemoveContainer" containerID="36dd0d1088e94533c87fa1a6130ade4852def3614ab1bd592807ccb233bb851f" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.586563 4805 scope.go:117] "RemoveContainer" containerID="b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5" Feb 17 00:48:53 crc kubenswrapper[4805]: E0217 00:48:53.586964 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5\": container with ID starting with b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5 not found: ID does not exist" containerID="b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.587013 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5"} err="failed to get container status \"b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5\": rpc error: code = NotFound desc = could not find container \"b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5\": container with ID starting with b5676b9180e9e0888e983b982a17799022d5a5da19b1702269cf0c926bb907f5 not found: ID does not exist" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.587040 4805 scope.go:117] "RemoveContainer" containerID="bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774" Feb 17 00:48:53 crc kubenswrapper[4805]: E0217 00:48:53.587354 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774\": container with ID starting with bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774 not found: ID does not exist" containerID="bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.587375 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774"} err="failed to get container status \"bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774\": rpc error: code = NotFound desc = could not find container \"bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774\": container with ID starting with bc9daea41965e61da55cda98ec20fb84c944a797fede7d849927cb0d8eefd774 not found: ID does not exist" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.587387 4805 scope.go:117] "RemoveContainer" containerID="36dd0d1088e94533c87fa1a6130ade4852def3614ab1bd592807ccb233bb851f" Feb 17 00:48:53 crc kubenswrapper[4805]: E0217 00:48:53.587652 4805 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"36dd0d1088e94533c87fa1a6130ade4852def3614ab1bd592807ccb233bb851f\": container with ID starting with 36dd0d1088e94533c87fa1a6130ade4852def3614ab1bd592807ccb233bb851f not found: ID does not exist" containerID="36dd0d1088e94533c87fa1a6130ade4852def3614ab1bd592807ccb233bb851f" Feb 17 00:48:53 crc kubenswrapper[4805]: I0217 00:48:53.587669 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36dd0d1088e94533c87fa1a6130ade4852def3614ab1bd592807ccb233bb851f"} err="failed to get container status \"36dd0d1088e94533c87fa1a6130ade4852def3614ab1bd592807ccb233bb851f\": rpc error: code = NotFound desc = could not find container \"36dd0d1088e94533c87fa1a6130ade4852def3614ab1bd592807ccb233bb851f\": container with ID starting with 36dd0d1088e94533c87fa1a6130ade4852def3614ab1bd592807ccb233bb851f not found: ID does not exist" Feb 17 00:48:53 crc kubenswrapper[4805]: E0217 00:48:53.785598 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:48:54 crc kubenswrapper[4805]: E0217 00:48:54.626392 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4dc4b6a_3026_47b5_be70_2bbb24fbf5c1.slice\": RecentStats: unable to find data in memory cache]" Feb 17 00:48:54 crc kubenswrapper[4805]: I0217 00:48:54.812854 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" path="/var/lib/kubelet/pods/a36c2d5c-712c-4680-a1a0-f1140ee6fbc7/volumes" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.261099 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"] Feb 17 00:48:56 crc kubenswrapper[4805]: E0217 00:48:56.262572 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerName="extract-utilities" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.262674 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerName="extract-utilities" Feb 17 00:48:56 crc kubenswrapper[4805]: E0217 00:48:56.262799 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerName="registry-server" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.262874 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerName="registry-server" Feb 17 00:48:56 crc kubenswrapper[4805]: E0217 00:48:56.262974 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerName="extract-content" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.263046 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerName="extract-content" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.263411 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a36c2d5c-712c-4680-a1a0-f1140ee6fbc7" containerName="registry-server" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 
00:48:56.265827 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.302236 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"] Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.434669 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-utilities\") pod \"redhat-marketplace-p4b9d\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.434735 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726bj\" (UniqueName: \"kubernetes.io/projected/c50edcec-aeeb-49b6-812d-0f12c5f9f340-kube-api-access-726bj\") pod \"redhat-marketplace-p4b9d\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.434795 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-catalog-content\") pod \"redhat-marketplace-p4b9d\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.536872 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-utilities\") pod \"redhat-marketplace-p4b9d\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.536925 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-726bj\" (UniqueName: \"kubernetes.io/projected/c50edcec-aeeb-49b6-812d-0f12c5f9f340-kube-api-access-726bj\") pod \"redhat-marketplace-p4b9d\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.536979 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-catalog-content\") pod \"redhat-marketplace-p4b9d\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.537868 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-catalog-content\") pod \"redhat-marketplace-p4b9d\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.537882 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-utilities\") pod \"redhat-marketplace-p4b9d\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 
00:48:56.577737 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-726bj\" (UniqueName: \"kubernetes.io/projected/c50edcec-aeeb-49b6-812d-0f12c5f9f340-kube-api-access-726bj\") pod \"redhat-marketplace-p4b9d\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:48:56 crc kubenswrapper[4805]: I0217 00:48:56.595610 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:48:57 crc kubenswrapper[4805]: I0217 00:48:57.117227 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"] Feb 17 00:48:57 crc kubenswrapper[4805]: I0217 00:48:57.506521 4805 generic.go:334] "Generic (PLEG): container finished" podID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" containerID="4bc594d1203bd417328d422531ef1c76fa3060653182460ed644640ebb4e097a" exitCode=0 Feb 17 00:48:57 crc kubenswrapper[4805]: I0217 00:48:57.506599 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"c50edcec-aeeb-49b6-812d-0f12c5f9f340","Type":"ContainerDied","Data":"4bc594d1203bd417328d422531ef1c76fa3060653182460ed644640ebb4e097a"} Feb 17 00:48:57 crc kubenswrapper[4805]: I0217 00:48:57.506776 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"c50edcec-aeeb-49b6-812d-0f12c5f9f340","Type":"ContainerStarted","Data":"3a22505c3dfa8345430b36ef1f2ccb876fefc61a42857232ca47e1c29e663d48"} Feb 17 00:48:58 crc kubenswrapper[4805]: I0217 00:48:58.519664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"c50edcec-aeeb-49b6-812d-0f12c5f9f340","Type":"ContainerStarted","Data":"0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427"} Feb 17 00:48:59 crc kubenswrapper[4805]: I0217 00:48:59.531448 4805 generic.go:334] "Generic (PLEG): container finished" podID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" containerID="0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427" exitCode=0 Feb 17 00:48:59 crc kubenswrapper[4805]: I0217 00:48:59.531493 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"c50edcec-aeeb-49b6-812d-0f12c5f9f340","Type":"ContainerDied","Data":"0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427"} Feb 17 00:49:00 crc kubenswrapper[4805]: I0217 00:49:00.555189 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"c50edcec-aeeb-49b6-812d-0f12c5f9f340","Type":"ContainerStarted","Data":"b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb"} Feb 17 00:49:00 crc kubenswrapper[4805]: I0217 00:49:00.582291 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p4b9d" podStartSLOduration=2.122919922 podStartE2EDuration="4.58227114s" podCreationTimestamp="2026-02-17 00:48:56 +0000 UTC" firstStartedPulling="2026-02-17 00:48:57.508381335 +0000 UTC m=+1563.524190753" lastFinishedPulling="2026-02-17 00:48:59.967732583 +0000 UTC m=+1565.983541971" observedRunningTime="2026-02-17 00:49:00.575908023 +0000 UTC m=+1566.591717431" watchObservedRunningTime="2026-02-17 00:49:00.58227114 +0000 UTC m=+1566.598080548" Feb 17 00:49:01 crc kubenswrapper[4805]: I0217 00:49:01.785022 4805 scope.go:117] 
"RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:49:01 crc kubenswrapper[4805]: E0217 00:49:01.785509 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:49:03 crc kubenswrapper[4805]: E0217 00:49:03.924631 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:49:03 crc kubenswrapper[4805]: E0217 00:49:03.925041 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:49:03 crc kubenswrapper[4805]: E0217 00:49:03.925180 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:49:03 crc kubenswrapper[4805]: E0217 00:49:03.926560 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:49:06 crc kubenswrapper[4805]: I0217 00:49:06.596516 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:49:06 crc kubenswrapper[4805]: I0217 00:49:06.596877 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:49:06 crc kubenswrapper[4805]: I0217 00:49:06.670831 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:49:06 crc kubenswrapper[4805]: I0217 00:49:06.718107 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:49:06 crc kubenswrapper[4805]: I0217 00:49:06.912463 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"] Feb 17 00:49:08 crc kubenswrapper[4805]: I0217 00:49:08.651972 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p4b9d" podUID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" containerName="registry-server" containerID="cri-o://b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb" gracePeriod=2 Feb 17 00:49:08 crc kubenswrapper[4805]: E0217 00:49:08.932522 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:49:08 crc kubenswrapper[4805]: E0217 00:49:08.932800 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:49:08 crc kubenswrapper[4805]: E0217 00:49:08.932921 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 00:49:08 crc kubenswrapper[4805]: E0217 00:49:08.934837 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.270652 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.370860 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-utilities\") pod \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.370941 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-catalog-content\") pod \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.371005 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-726bj\" (UniqueName: \"kubernetes.io/projected/c50edcec-aeeb-49b6-812d-0f12c5f9f340-kube-api-access-726bj\") pod \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\" (UID: \"c50edcec-aeeb-49b6-812d-0f12c5f9f340\") " Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.371756 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-utilities" (OuterVolumeSpecName: "utilities") pod "c50edcec-aeeb-49b6-812d-0f12c5f9f340" (UID: "c50edcec-aeeb-49b6-812d-0f12c5f9f340"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.381195 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c50edcec-aeeb-49b6-812d-0f12c5f9f340-kube-api-access-726bj" (OuterVolumeSpecName: "kube-api-access-726bj") pod "c50edcec-aeeb-49b6-812d-0f12c5f9f340" (UID: "c50edcec-aeeb-49b6-812d-0f12c5f9f340"). InnerVolumeSpecName "kube-api-access-726bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.403450 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c50edcec-aeeb-49b6-812d-0f12c5f9f340" (UID: "c50edcec-aeeb-49b6-812d-0f12c5f9f340"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.473240 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.473485 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-726bj\" (UniqueName: \"kubernetes.io/projected/c50edcec-aeeb-49b6-812d-0f12c5f9f340-kube-api-access-726bj\") on node \"crc\" DevicePath \"\"" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.473561 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c50edcec-aeeb-49b6-812d-0f12c5f9f340-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.669012 4805 generic.go:334] "Generic (PLEG): container finished" podID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" containerID="b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb" exitCode=0 Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.669090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"c50edcec-aeeb-49b6-812d-0f12c5f9f340","Type":"ContainerDied","Data":"b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb"} Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.669141 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"c50edcec-aeeb-49b6-812d-0f12c5f9f340","Type":"ContainerDied","Data":"3a22505c3dfa8345430b36ef1f2ccb876fefc61a42857232ca47e1c29e663d48"} Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.669181 4805 scope.go:117] "RemoveContainer" containerID="b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.669512 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4b9d" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.726937 4805 scope.go:117] "RemoveContainer" containerID="0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.734186 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"] Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.746454 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"] Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.771731 4805 scope.go:117] "RemoveContainer" containerID="4bc594d1203bd417328d422531ef1c76fa3060653182460ed644640ebb4e097a" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.833520 4805 scope.go:117] "RemoveContainer" containerID="b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb" Feb 17 00:49:09 crc kubenswrapper[4805]: E0217 00:49:09.833947 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb\": container with ID starting with b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb not found: ID does not exist" containerID="b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.833979 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb"} err="failed to get container status \"b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb\": rpc error: code = NotFound desc = could not find container \"b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb\": container with ID starting with b27dbce5395094728fcdf16d2d3599df0feb813cb25b85deafc21c414019cceb not found: ID does not exist" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.834012 4805 scope.go:117] "RemoveContainer" containerID="0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427" Feb 17 00:49:09 crc kubenswrapper[4805]: E0217 00:49:09.834524 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427\": container with ID starting with 0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427 not found: ID does not exist" containerID="0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.834544 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427"} err="failed to get container status \"0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427\": rpc error: code = NotFound desc = could not find container \"0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427\": container with ID starting with 0555edd496e63c49cdff856995208a4ef8999982f2215ee52eabdf1897cd2427 not found: ID does not exist" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.834565 4805 scope.go:117] "RemoveContainer" containerID="4bc594d1203bd417328d422531ef1c76fa3060653182460ed644640ebb4e097a" Feb 17 00:49:09 crc kubenswrapper[4805]: E0217 00:49:09.834830 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4bc594d1203bd417328d422531ef1c76fa3060653182460ed644640ebb4e097a\": container with ID starting with 4bc594d1203bd417328d422531ef1c76fa3060653182460ed644640ebb4e097a not found: ID does not exist" containerID="4bc594d1203bd417328d422531ef1c76fa3060653182460ed644640ebb4e097a" Feb 17 00:49:09 crc kubenswrapper[4805]: I0217 00:49:09.834851 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc594d1203bd417328d422531ef1c76fa3060653182460ed644640ebb4e097a"} err="failed to get container status \"4bc594d1203bd417328d422531ef1c76fa3060653182460ed644640ebb4e097a\": rpc error: code = NotFound desc = could not find container \"4bc594d1203bd417328d422531ef1c76fa3060653182460ed644640ebb4e097a\": container with ID starting with 4bc594d1203bd417328d422531ef1c76fa3060653182460ed644640ebb4e097a not found: ID does not exist" Feb 17 00:49:10 crc kubenswrapper[4805]: I0217 00:49:10.800548 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" path="/var/lib/kubelet/pods/c50edcec-aeeb-49b6-812d-0f12c5f9f340/volumes" Feb 17 00:49:15 crc kubenswrapper[4805]: I0217 00:49:15.133966 4805 scope.go:117] "RemoveContainer" containerID="8f7b5996bb3baf66a48bffeafa69160fb68716c1e0a3995629306da5bb81fb20" Feb 17 00:49:15 crc kubenswrapper[4805]: I0217 00:49:15.176657 4805 scope.go:117] "RemoveContainer" containerID="66da351eff6651b8820d8d164de99f239e76e6f6f571c6175de14c07eafa1e3f" Feb 17 00:49:15 crc kubenswrapper[4805]: I0217 00:49:15.213191 4805 scope.go:117] "RemoveContainer" containerID="7ac35363fda0f2081de48eb146c51da54e39e0d7dc0b2f422289f5f3444be076" Feb 17 00:49:15 crc kubenswrapper[4805]: I0217 00:49:15.277257 4805 scope.go:117] "RemoveContainer" containerID="c49e505056e2634c85d207535b633323c26c243219beafef205c79ae22b1e532" Feb 17 00:49:15 crc kubenswrapper[4805]: I0217 00:49:15.327471 4805 scope.go:117] "RemoveContainer" containerID="fe832a2d02c28d84252e7c1edfde0c46a465cc48d68c8bcae31e0c2c15dbd45d" Feb 17 00:49:15 crc kubenswrapper[4805]: I0217 00:49:15.378371 4805 scope.go:117] "RemoveContainer" containerID="a967e96b4fc9fb26d0d2c908cb214ed3caac2bda655ee5c46048d0e504a60b3a" Feb 17 00:49:16 crc kubenswrapper[4805]: I0217 00:49:16.784848 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:49:16 crc kubenswrapper[4805]: E0217 00:49:16.785179 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:49:18 crc kubenswrapper[4805]: E0217 00:49:18.787445 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:49:20 crc kubenswrapper[4805]: E0217 00:49:20.787691 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:49:31 crc kubenswrapper[4805]: I0217 00:49:31.785221 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:49:31 crc kubenswrapper[4805]: E0217 00:49:31.785930 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:49:31 crc kubenswrapper[4805]: E0217 00:49:31.787251 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:49:33 crc kubenswrapper[4805]: E0217 00:49:33.787049 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:49:43 crc kubenswrapper[4805]: E0217 00:49:43.787217 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:49:45 crc kubenswrapper[4805]: I0217 00:49:45.784742 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:49:45 crc kubenswrapper[4805]: E0217 00:49:45.785165 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:49:46 crc kubenswrapper[4805]: E0217 00:49:46.787185 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:49:55 crc kubenswrapper[4805]: E0217 00:49:55.788640 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" 
Feb 17 00:49:57 crc kubenswrapper[4805]: E0217 00:49:57.789248 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:49:58 crc kubenswrapper[4805]: I0217 00:49:58.785249 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:49:58 crc kubenswrapper[4805]: E0217 00:49:58.785710 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:50:10 crc kubenswrapper[4805]: E0217 00:50:10.790496 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:50:11 crc kubenswrapper[4805]: I0217 00:50:11.785530 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:50:11 crc kubenswrapper[4805]: E0217 00:50:11.787292 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:50:11 crc kubenswrapper[4805]: E0217 00:50:11.788079 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:50:15 crc kubenswrapper[4805]: I0217 00:50:15.647719 4805 scope.go:117] "RemoveContainer" containerID="a4b6d3b9acf976a3b824591e2e345591c3ae1f9b703ce1320ac7a1b395415efa" Feb 17 00:50:15 crc kubenswrapper[4805]: I0217 00:50:15.678305 4805 scope.go:117] "RemoveContainer" containerID="cfcccfd5b15c29633353d469e79a73b1b9c56503e92879a49713d378e7117a44" Feb 17 00:50:22 crc kubenswrapper[4805]: E0217 00:50:22.788563 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:50:23 crc kubenswrapper[4805]: E0217 00:50:23.787616 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:50:25 crc kubenswrapper[4805]: I0217 00:50:25.785913 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:50:25 crc kubenswrapper[4805]: E0217 00:50:25.786758 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:50:37 crc kubenswrapper[4805]: E0217 00:50:37.880996 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:50:37 crc kubenswrapper[4805]: E0217 00:50:37.881950 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:50:37 crc kubenswrapper[4805]: E0217 00:50:37.882176 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:50:37 crc kubenswrapper[4805]: E0217 00:50:37.883712 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:50:37 crc kubenswrapper[4805]: E0217 00:50:37.910590 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:50:37 crc kubenswrapper[4805]: E0217 00:50:37.910673 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:50:37 crc kubenswrapper[4805]: E0217 00:50:37.910861 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:50:37 crc kubenswrapper[4805]: E0217 00:50:37.912124 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:50:39 crc kubenswrapper[4805]: I0217 00:50:39.784054 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:50:39 crc kubenswrapper[4805]: E0217 00:50:39.784542 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:50:48 crc kubenswrapper[4805]: E0217 00:50:48.787959 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:50:51 crc kubenswrapper[4805]: E0217 00:50:51.788064 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:50:52 crc kubenswrapper[4805]: I0217 00:50:52.785056 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:50:52 crc kubenswrapper[4805]: E0217 00:50:52.785544 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:51:00 crc kubenswrapper[4805]: E0217 00:51:00.791285 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:51:03 crc kubenswrapper[4805]: I0217 00:51:03.784638 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:51:03 crc kubenswrapper[4805]: E0217 
00:51:03.785859 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:51:06 crc kubenswrapper[4805]: E0217 00:51:06.788009 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:51:12 crc kubenswrapper[4805]: E0217 00:51:12.790104 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:51:17 crc kubenswrapper[4805]: E0217 00:51:17.788499 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:51:18 crc kubenswrapper[4805]: I0217 00:51:18.785954 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:51:18 crc kubenswrapper[4805]: E0217 00:51:18.786490 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:51:26 crc kubenswrapper[4805]: E0217 00:51:26.787534 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:51:29 crc kubenswrapper[4805]: E0217 00:51:29.790570 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:51:31 crc kubenswrapper[4805]: I0217 00:51:31.627501 4805 generic.go:334] "Generic (PLEG): container finished" podID="0093521f-7e1e-421e-a1ce-bf4e5612ba77" containerID="43384c7fa6acabe56ae6ae89c1ecaad02d04be7e3cbd9caf00d19bb0fcb905d4" exitCode=0 Feb 17 00:51:31 crc kubenswrapper[4805]: I0217 00:51:31.627569 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" event={"ID":"0093521f-7e1e-421e-a1ce-bf4e5612ba77","Type":"ContainerDied","Data":"43384c7fa6acabe56ae6ae89c1ecaad02d04be7e3cbd9caf00d19bb0fcb905d4"} Feb 17 00:51:31 crc kubenswrapper[4805]: I0217 00:51:31.785218 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:51:31 crc kubenswrapper[4805]: E0217 00:51:31.785839 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.152803 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.237158 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2zfg\" (UniqueName: \"kubernetes.io/projected/0093521f-7e1e-421e-a1ce-bf4e5612ba77-kube-api-access-w2zfg\") pod \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.237308 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-ssh-key-openstack-edpm-ipam\") pod \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.237583 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-bootstrap-combined-ca-bundle\") pod \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.237640 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-inventory\") pod \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\" (UID: \"0093521f-7e1e-421e-a1ce-bf4e5612ba77\") " Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.243303 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0093521f-7e1e-421e-a1ce-bf4e5612ba77-kube-api-access-w2zfg" (OuterVolumeSpecName: "kube-api-access-w2zfg") pod "0093521f-7e1e-421e-a1ce-bf4e5612ba77" (UID: "0093521f-7e1e-421e-a1ce-bf4e5612ba77"). InnerVolumeSpecName "kube-api-access-w2zfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.250516 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0093521f-7e1e-421e-a1ce-bf4e5612ba77" (UID: "0093521f-7e1e-421e-a1ce-bf4e5612ba77"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.268742 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-inventory" (OuterVolumeSpecName: "inventory") pod "0093521f-7e1e-421e-a1ce-bf4e5612ba77" (UID: "0093521f-7e1e-421e-a1ce-bf4e5612ba77"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.278157 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0093521f-7e1e-421e-a1ce-bf4e5612ba77" (UID: "0093521f-7e1e-421e-a1ce-bf4e5612ba77"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.340024 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2zfg\" (UniqueName: \"kubernetes.io/projected/0093521f-7e1e-421e-a1ce-bf4e5612ba77-kube-api-access-w2zfg\") on node \"crc\" DevicePath \"\"" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.340160 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.340228 4805 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.340291 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0093521f-7e1e-421e-a1ce-bf4e5612ba77-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.661864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" event={"ID":"0093521f-7e1e-421e-a1ce-bf4e5612ba77","Type":"ContainerDied","Data":"a17248697d91073be63892d64a790fbfa52e398f681e6fef5ec1297024cde5fd"} Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.662231 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a17248697d91073be63892d64a790fbfa52e398f681e6fef5ec1297024cde5fd" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.662190 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.788863 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646"] Feb 17 00:51:33 crc kubenswrapper[4805]: E0217 00:51:33.789426 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" containerName="extract-content" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.789444 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" containerName="extract-content" Feb 17 00:51:33 crc kubenswrapper[4805]: E0217 00:51:33.789473 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0093521f-7e1e-421e-a1ce-bf4e5612ba77" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.789482 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0093521f-7e1e-421e-a1ce-bf4e5612ba77" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 00:51:33 crc kubenswrapper[4805]: E0217 00:51:33.789506 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" containerName="registry-server" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.789514 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" containerName="registry-server" Feb 17 00:51:33 crc kubenswrapper[4805]: E0217 00:51:33.789535 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" containerName="extract-utilities" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.789542 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" containerName="extract-utilities" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.789788 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50edcec-aeeb-49b6-812d-0f12c5f9f340" containerName="registry-server" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.789812 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0093521f-7e1e-421e-a1ce-bf4e5612ba77" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.790713 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.793135 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.794579 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.795243 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.795799 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.807637 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646"] Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.852005 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4646\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.852056 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8497\" (UniqueName: \"kubernetes.io/projected/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-kube-api-access-c8497\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4646\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.852419 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4646\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.956646 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4646\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.956693 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8497\" (UniqueName: \"kubernetes.io/projected/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-kube-api-access-c8497\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4646\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.956779 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4646\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.962560 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4646\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.965285 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4646\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:51:33 crc kubenswrapper[4805]: I0217 00:51:33.975344 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8497\" (UniqueName: \"kubernetes.io/projected/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-kube-api-access-c8497\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-r4646\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:51:34 crc kubenswrapper[4805]: I0217 00:51:34.160811 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:51:34 crc kubenswrapper[4805]: I0217 00:51:34.759049 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646"] Feb 17 00:51:35 crc kubenswrapper[4805]: I0217 00:51:35.686142 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" event={"ID":"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a","Type":"ContainerStarted","Data":"e7b59c285da9339cea9919b1c5356c66cec477b64e54af31a4bf2f8c94813998"} Feb 17 00:51:35 crc kubenswrapper[4805]: I0217 00:51:35.686912 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" event={"ID":"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a","Type":"ContainerStarted","Data":"e5d26721b63ecff47e10c9566971d3d5116c76752c6de5037e8850280d2f90f3"} Feb 17 00:51:35 crc kubenswrapper[4805]: I0217 00:51:35.703126 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" podStartSLOduration=2.274227184 podStartE2EDuration="2.703103772s" podCreationTimestamp="2026-02-17 00:51:33 +0000 UTC" firstStartedPulling="2026-02-17 00:51:34.759181787 +0000 UTC m=+1720.774991185" lastFinishedPulling="2026-02-17 00:51:35.188058375 +0000 UTC m=+1721.203867773" observedRunningTime="2026-02-17 00:51:35.701020114 +0000 UTC m=+1721.716829532" watchObservedRunningTime="2026-02-17 00:51:35.703103772 +0000 UTC m=+1721.718913180" Feb 17 00:51:39 crc kubenswrapper[4805]: E0217 00:51:39.787247 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:51:41 crc kubenswrapper[4805]: E0217 00:51:41.789277 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:51:45 crc kubenswrapper[4805]: I0217 00:51:45.785189 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:51:45 crc kubenswrapper[4805]: E0217 00:51:45.786123 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:51:51 crc kubenswrapper[4805]: E0217 00:51:51.788096 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:51:55 crc kubenswrapper[4805]: E0217 00:51:55.788343 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:51:56 crc kubenswrapper[4805]: I0217 00:51:56.785436 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:51:56 crc kubenswrapper[4805]: E0217 00:51:56.785821 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:52:02 crc kubenswrapper[4805]: E0217 00:52:02.787900 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:52:08 crc kubenswrapper[4805]: E0217 00:52:08.789559 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:52:10 crc kubenswrapper[4805]: I0217 00:52:10.785029 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:52:10 crc kubenswrapper[4805]: E0217 00:52:10.785676 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:52:15 crc kubenswrapper[4805]: E0217 00:52:15.789345 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:52:20 crc kubenswrapper[4805]: E0217 00:52:20.788589 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:52:25 crc kubenswrapper[4805]: I0217 00:52:25.784476 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:52:25 crc kubenswrapper[4805]: E0217 00:52:25.785377 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:52:28 crc kubenswrapper[4805]: E0217 00:52:28.788229 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:52:32 crc kubenswrapper[4805]: E0217 00:52:32.788979 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:52:34 crc kubenswrapper[4805]: I0217 00:52:34.067541 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dspfd"] Feb 17 00:52:34 crc kubenswrapper[4805]: I0217 00:52:34.084479 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dspfd"] Feb 17 00:52:34 crc kubenswrapper[4805]: I0217 00:52:34.836762 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef27e931-15d7-45e2-ae8d-cd31c9fffdd5" 
path="/var/lib/kubelet/pods/ef27e931-15d7-45e2-ae8d-cd31c9fffdd5/volumes" Feb 17 00:52:36 crc kubenswrapper[4805]: I0217 00:52:36.786044 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:52:36 crc kubenswrapper[4805]: E0217 00:52:36.787121 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:52:38 crc kubenswrapper[4805]: I0217 00:52:38.049715 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7a5e-account-create-update-mcmp6"] Feb 17 00:52:38 crc kubenswrapper[4805]: I0217 00:52:38.070308 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7a5e-account-create-update-mcmp6"] Feb 17 00:52:38 crc kubenswrapper[4805]: I0217 00:52:38.809494 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b3669f3-fc93-4d03-a114-3de9f6385fc5" path="/var/lib/kubelet/pods/7b3669f3-fc93-4d03-a114-3de9f6385fc5/volumes" Feb 17 00:52:39 crc kubenswrapper[4805]: E0217 00:52:39.790833 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.050544 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-fsxmm"] Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.070559 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fq4tj"] Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.087686 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-2zdbb"] Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.103402 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-3992-account-create-update-h2vts"] Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.116273 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3c43-account-create-update-wqp2f"] Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.126723 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fq4tj"] Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.135470 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-fsxmm"] Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.145716 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-2zdbb"] Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.161435 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3c43-account-create-update-wqp2f"] Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.174777 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-3992-account-create-update-h2vts"] Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.813544 4805 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="06dfed54-f183-46cc-abd4-089a231b2201" path="/var/lib/kubelet/pods/06dfed54-f183-46cc-abd4-089a231b2201/volumes" Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.814437 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db2630f-effd-4730-a324-bbfe90d75a8a" path="/var/lib/kubelet/pods/1db2630f-effd-4730-a324-bbfe90d75a8a/volumes" Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.815215 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="844af17c-95de-4afa-8d20-f00cf5195840" path="/var/lib/kubelet/pods/844af17c-95de-4afa-8d20-f00cf5195840/volumes" Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.817092 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e10a8c-b19f-4558-acef-2027c30614bf" path="/var/lib/kubelet/pods/b9e10a8c-b19f-4558-acef-2027c30614bf/volumes" Feb 17 00:52:42 crc kubenswrapper[4805]: I0217 00:52:42.818448 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0711d3-a423-437c-9de6-9c0be097d3bd" path="/var/lib/kubelet/pods/dd0711d3-a423-437c-9de6-9c0be097d3bd/volumes" Feb 17 00:52:43 crc kubenswrapper[4805]: I0217 00:52:43.041635 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7c85-account-create-update-xt2cz"] Feb 17 00:52:43 crc kubenswrapper[4805]: I0217 00:52:43.058183 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7c85-account-create-update-xt2cz"] Feb 17 00:52:44 crc kubenswrapper[4805]: I0217 00:52:44.194405 4805 generic.go:334] "Generic (PLEG): container finished" podID="f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a" containerID="e7b59c285da9339cea9919b1c5356c66cec477b64e54af31a4bf2f8c94813998" exitCode=0 Feb 17 00:52:44 crc kubenswrapper[4805]: I0217 00:52:44.194775 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" event={"ID":"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a","Type":"ContainerDied","Data":"e7b59c285da9339cea9919b1c5356c66cec477b64e54af31a4bf2f8c94813998"} Feb 17 00:52:44 crc kubenswrapper[4805]: I0217 00:52:44.812626 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d58b7ac7-8a62-4f29-bb0a-7915e01e87ba" path="/var/lib/kubelet/pods/d58b7ac7-8a62-4f29-bb0a-7915e01e87ba/volumes" Feb 17 00:52:45 crc kubenswrapper[4805]: I0217 00:52:45.707925 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:52:45 crc kubenswrapper[4805]: I0217 00:52:45.767597 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8497\" (UniqueName: \"kubernetes.io/projected/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-kube-api-access-c8497\") pod \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " Feb 17 00:52:45 crc kubenswrapper[4805]: I0217 00:52:45.767687 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-inventory\") pod \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " Feb 17 00:52:45 crc kubenswrapper[4805]: I0217 00:52:45.767827 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-ssh-key-openstack-edpm-ipam\") pod \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\" (UID: \"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a\") " Feb 17 00:52:45 crc kubenswrapper[4805]: I0217 00:52:45.778124 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-kube-api-access-c8497" (OuterVolumeSpecName: "kube-api-access-c8497") pod "f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a" (UID: "f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a"). InnerVolumeSpecName "kube-api-access-c8497". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:52:45 crc kubenswrapper[4805]: I0217 00:52:45.806603 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a" (UID: "f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:52:45 crc kubenswrapper[4805]: I0217 00:52:45.817417 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-inventory" (OuterVolumeSpecName: "inventory") pod "f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a" (UID: "f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:52:45 crc kubenswrapper[4805]: I0217 00:52:45.871136 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8497\" (UniqueName: \"kubernetes.io/projected/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-kube-api-access-c8497\") on node \"crc\" DevicePath \"\"" Feb 17 00:52:45 crc kubenswrapper[4805]: I0217 00:52:45.871203 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 00:52:45 crc kubenswrapper[4805]: I0217 00:52:45.871223 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.233631 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" event={"ID":"f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a","Type":"ContainerDied","Data":"e5d26721b63ecff47e10c9566971d3d5116c76752c6de5037e8850280d2f90f3"} Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.234016 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5d26721b63ecff47e10c9566971d3d5116c76752c6de5037e8850280d2f90f3" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.233730 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-r4646" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.348023 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7"] Feb 17 00:52:46 crc kubenswrapper[4805]: E0217 00:52:46.348575 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.348597 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.348893 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.349810 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.353473 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.353611 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.354044 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.354485 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.360201 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7"] Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.381705 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scrfs\" (UniqueName: \"kubernetes.io/projected/5f257638-7e99-4278-9d14-395b4c2a89ac-kube-api-access-scrfs\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.381820 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.381947 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.483427 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.483542 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scrfs\" (UniqueName: \"kubernetes.io/projected/5f257638-7e99-4278-9d14-395b4c2a89ac-kube-api-access-scrfs\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.483620 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.488567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.490819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.498456 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scrfs\" (UniqueName: \"kubernetes.io/projected/5f257638-7e99-4278-9d14-395b4c2a89ac-kube-api-access-scrfs\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:46 crc kubenswrapper[4805]: I0217 00:52:46.682837 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:47 crc kubenswrapper[4805]: I0217 00:52:47.296318 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7"] Feb 17 00:52:47 crc kubenswrapper[4805]: E0217 00:52:47.791802 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:52:48 crc kubenswrapper[4805]: I0217 00:52:48.274812 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" event={"ID":"5f257638-7e99-4278-9d14-395b4c2a89ac","Type":"ContainerStarted","Data":"a1b8c57a5bb3821509bf29e1e8cc0c335b2fb528014c434882896b4ab4a82bcd"} Feb 17 00:52:48 crc kubenswrapper[4805]: I0217 00:52:48.275276 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" event={"ID":"5f257638-7e99-4278-9d14-395b4c2a89ac","Type":"ContainerStarted","Data":"1a27bdaca8d2f04116b889fe2a6d9c5a5efa9a970a8bfc82bae680b119be2def"} Feb 17 00:52:48 crc kubenswrapper[4805]: I0217 00:52:48.297945 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" podStartSLOduration=1.8201432770000001 podStartE2EDuration="2.297920252s" podCreationTimestamp="2026-02-17 00:52:46 +0000 UTC" firstStartedPulling="2026-02-17 00:52:47.315873323 +0000 UTC m=+1793.331682731" 
lastFinishedPulling="2026-02-17 00:52:47.793650278 +0000 UTC m=+1793.809459706" observedRunningTime="2026-02-17 00:52:48.295318809 +0000 UTC m=+1794.311128237" watchObservedRunningTime="2026-02-17 00:52:48.297920252 +0000 UTC m=+1794.313729680" Feb 17 00:52:49 crc kubenswrapper[4805]: I0217 00:52:49.045650 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-3d7f-account-create-update-hd6xl"] Feb 17 00:52:49 crc kubenswrapper[4805]: I0217 00:52:49.067509 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-hw88l"] Feb 17 00:52:49 crc kubenswrapper[4805]: I0217 00:52:49.081401 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-3d7f-account-create-update-hd6xl"] Feb 17 00:52:49 crc kubenswrapper[4805]: I0217 00:52:49.094618 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-hw88l"] Feb 17 00:52:50 crc kubenswrapper[4805]: I0217 00:52:50.807219 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab61f86-d58e-4874-99f0-bd197d671827" path="/var/lib/kubelet/pods/3ab61f86-d58e-4874-99f0-bd197d671827/volumes" Feb 17 00:52:50 crc kubenswrapper[4805]: I0217 00:52:50.808950 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a6ab18e-af1c-44c2-9d84-cb294ed04fdb" path="/var/lib/kubelet/pods/4a6ab18e-af1c-44c2-9d84-cb294ed04fdb/volumes" Feb 17 00:52:51 crc kubenswrapper[4805]: I0217 00:52:51.786014 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:52:51 crc kubenswrapper[4805]: E0217 00:52:51.786957 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:52:51 crc kubenswrapper[4805]: E0217 00:52:51.786991 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:52:53 crc kubenswrapper[4805]: I0217 00:52:53.372291 4805 generic.go:334] "Generic (PLEG): container finished" podID="5f257638-7e99-4278-9d14-395b4c2a89ac" containerID="a1b8c57a5bb3821509bf29e1e8cc0c335b2fb528014c434882896b4ab4a82bcd" exitCode=0 Feb 17 00:52:53 crc kubenswrapper[4805]: I0217 00:52:53.372423 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" event={"ID":"5f257638-7e99-4278-9d14-395b4c2a89ac","Type":"ContainerDied","Data":"a1b8c57a5bb3821509bf29e1e8cc0c335b2fb528014c434882896b4ab4a82bcd"} Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.035351 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.132430 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-ssh-key-openstack-edpm-ipam\") pod \"5f257638-7e99-4278-9d14-395b4c2a89ac\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.132603 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scrfs\" (UniqueName: \"kubernetes.io/projected/5f257638-7e99-4278-9d14-395b4c2a89ac-kube-api-access-scrfs\") pod \"5f257638-7e99-4278-9d14-395b4c2a89ac\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.132787 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-inventory\") pod \"5f257638-7e99-4278-9d14-395b4c2a89ac\" (UID: \"5f257638-7e99-4278-9d14-395b4c2a89ac\") " Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.138683 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f257638-7e99-4278-9d14-395b4c2a89ac-kube-api-access-scrfs" (OuterVolumeSpecName: "kube-api-access-scrfs") pod "5f257638-7e99-4278-9d14-395b4c2a89ac" (UID: "5f257638-7e99-4278-9d14-395b4c2a89ac"). InnerVolumeSpecName "kube-api-access-scrfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.160921 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5f257638-7e99-4278-9d14-395b4c2a89ac" (UID: "5f257638-7e99-4278-9d14-395b4c2a89ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.161483 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-inventory" (OuterVolumeSpecName: "inventory") pod "5f257638-7e99-4278-9d14-395b4c2a89ac" (UID: "5f257638-7e99-4278-9d14-395b4c2a89ac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.235409 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.235452 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scrfs\" (UniqueName: \"kubernetes.io/projected/5f257638-7e99-4278-9d14-395b4c2a89ac-kube-api-access-scrfs\") on node \"crc\" DevicePath \"\"" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.235463 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f257638-7e99-4278-9d14-395b4c2a89ac-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.399312 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" event={"ID":"5f257638-7e99-4278-9d14-395b4c2a89ac","Type":"ContainerDied","Data":"1a27bdaca8d2f04116b889fe2a6d9c5a5efa9a970a8bfc82bae680b119be2def"} Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.399843 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a27bdaca8d2f04116b889fe2a6d9c5a5efa9a970a8bfc82bae680b119be2def" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.399420 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.498115 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn"] Feb 17 00:52:55 crc kubenswrapper[4805]: E0217 00:52:55.498834 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f257638-7e99-4278-9d14-395b4c2a89ac" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.498938 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f257638-7e99-4278-9d14-395b4c2a89ac" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.499274 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f257638-7e99-4278-9d14-395b4c2a89ac" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.500289 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.509245 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.509488 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.509844 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.510371 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.532124 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn"] Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.643781 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hb8nn\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.644060 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g2nx\" (UniqueName: \"kubernetes.io/projected/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-kube-api-access-9g2nx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hb8nn\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.644167 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hb8nn\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.747099 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hb8nn\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.747166 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g2nx\" (UniqueName: \"kubernetes.io/projected/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-kube-api-access-9g2nx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hb8nn\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.747212 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-hb8nn\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.752891 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hb8nn\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.753321 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hb8nn\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.775573 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g2nx\" (UniqueName: \"kubernetes.io/projected/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-kube-api-access-9g2nx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hb8nn\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:52:55 crc kubenswrapper[4805]: I0217 00:52:55.837924 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:52:56 crc kubenswrapper[4805]: I0217 00:52:56.538258 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn"] Feb 17 00:52:57 crc kubenswrapper[4805]: I0217 00:52:57.448732 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" event={"ID":"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d","Type":"ContainerStarted","Data":"8601cdf0a6dc2f5c0efbf97b5cc4c23d91595ea99910dd76394ea76f5da0a665"} Feb 17 00:52:57 crc kubenswrapper[4805]: I0217 00:52:57.449357 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" event={"ID":"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d","Type":"ContainerStarted","Data":"457e9cf0cca5351c673cba8f7daf50ad5d079c4cfd8da46941a0ea64cce3dc5b"} Feb 17 00:53:00 crc kubenswrapper[4805]: E0217 00:53:00.790820 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:53:05 crc kubenswrapper[4805]: E0217 00:53:05.788250 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:53:06 crc kubenswrapper[4805]: I0217 00:53:06.786507 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 
00:53:07 crc kubenswrapper[4805]: I0217 00:53:07.588437 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"2e9087c41c20ceb94baae00268714860eae0b0c62339840278c0c8161853155d"} Feb 17 00:53:07 crc kubenswrapper[4805]: I0217 00:53:07.618123 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" podStartSLOduration=12.190285463 podStartE2EDuration="12.618090652s" podCreationTimestamp="2026-02-17 00:52:55 +0000 UTC" firstStartedPulling="2026-02-17 00:52:56.541574436 +0000 UTC m=+1802.557383834" lastFinishedPulling="2026-02-17 00:52:56.969379585 +0000 UTC m=+1802.985189023" observedRunningTime="2026-02-17 00:52:57.479816126 +0000 UTC m=+1803.495625564" watchObservedRunningTime="2026-02-17 00:53:07.618090652 +0000 UTC m=+1813.633900090" Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.101742 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-750c-account-create-update-n6gdl"] Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.118289 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-wxcc6"] Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.131606 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-fp7zz"] Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.141963 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0869-account-create-update-dwqwt"] Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.153654 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-45ed-account-create-update-dndhk"] Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.165889 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-750c-account-create-update-n6gdl"] Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.175981 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fp7zz"] Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.186004 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0869-account-create-update-dwqwt"] Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.195658 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-wxcc6"] Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.206671 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-45ed-account-create-update-dndhk"] Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.798966 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0686c805-0a62-46a4-ae40-f3831191c403" path="/var/lib/kubelet/pods/0686c805-0a62-46a4-ae40-f3831191c403/volumes" Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.799527 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd7278c-a746-4195-9d5e-035f100862db" path="/var/lib/kubelet/pods/5cd7278c-a746-4195-9d5e-035f100862db/volumes" Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.814277 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7e01a2e-86e0-449a-96d8-37396b137271" path="/var/lib/kubelet/pods/b7e01a2e-86e0-449a-96d8-37396b137271/volumes" Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.814966 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ce100ad8-844c-4b1d-8c16-6acce86b75d2" path="/var/lib/kubelet/pods/ce100ad8-844c-4b1d-8c16-6acce86b75d2/volumes" Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.815722 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde176ec-50b1-4a8a-8b8d-a652fc977aa5" path="/var/lib/kubelet/pods/fde176ec-50b1-4a8a-8b8d-a652fc977aa5/volumes" Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.844041 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-8e7a-account-create-update-92gnd"] Feb 17 00:53:12 crc kubenswrapper[4805]: I0217 00:53:12.859937 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-8e7a-account-create-update-92gnd"] Feb 17 00:53:13 crc kubenswrapper[4805]: I0217 00:53:13.042967 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-nqsq7"] Feb 17 00:53:13 crc kubenswrapper[4805]: I0217 00:53:13.055090 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-d6ckd"] Feb 17 00:53:13 crc kubenswrapper[4805]: I0217 00:53:13.067684 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-nqsq7"] Feb 17 00:53:13 crc kubenswrapper[4805]: I0217 00:53:13.092468 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-d6ckd"] Feb 17 00:53:14 crc kubenswrapper[4805]: E0217 00:53:14.830423 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:53:14 crc kubenswrapper[4805]: I0217 00:53:14.835227 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c" path="/var/lib/kubelet/pods/466975d4-ec9f-4c39-ab8c-dcccf7bd9f8c/volumes" Feb 17 00:53:14 crc kubenswrapper[4805]: I0217 00:53:14.836151 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca1b1ba7-b284-4f58-baff-840133925a82" path="/var/lib/kubelet/pods/ca1b1ba7-b284-4f58-baff-840133925a82/volumes" Feb 17 00:53:14 crc kubenswrapper[4805]: I0217 00:53:14.842085 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f2fd03-808b-40ca-bea0-ac46f4f8770d" path="/var/lib/kubelet/pods/d2f2fd03-808b-40ca-bea0-ac46f4f8770d/volumes" Feb 17 00:53:15 crc kubenswrapper[4805]: I0217 00:53:15.862072 4805 scope.go:117] "RemoveContainer" containerID="ec49b0f8d358830df6e4c2847b0efbe4ca099ea1ca72b312be86054dc6d91659" Feb 17 00:53:15 crc kubenswrapper[4805]: I0217 00:53:15.902865 4805 scope.go:117] "RemoveContainer" containerID="8c0eb357f3d63907d5c804547af52b884c30783735ab631981ffec900d1f59c9" Feb 17 00:53:15 crc kubenswrapper[4805]: I0217 00:53:15.941496 4805 scope.go:117] "RemoveContainer" containerID="cb346269dd69d1d6bc92c676a23be687da8394fa4b800aa695a6b990aedcb1fa" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.019857 4805 scope.go:117] "RemoveContainer" containerID="2ba0459af916ba902fe0a984f5fa92aa763d8cef98fcc68d34591ef22554358a" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.068875 4805 scope.go:117] "RemoveContainer" containerID="a3bf4eaf6845bb8bc7a63f36847355f1129d1065934ae27afdd6fad8ce4d6068" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.123590 4805 scope.go:117] "RemoveContainer" 
containerID="7857cdd60814a3b1196dacd096d320c926e89bf3ae0358b634b2e3bbf5f7b5c0" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.157730 4805 scope.go:117] "RemoveContainer" containerID="94f88c087d451b909e3b5f712ea7d45c1990589e85bab58f20ae21d31efff3c0" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.218463 4805 scope.go:117] "RemoveContainer" containerID="15543308b45f895f0751984f2868f9e2b06966082c85e44f88df5f3a1caf1251" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.260250 4805 scope.go:117] "RemoveContainer" containerID="2b410673a8d29b0f411b0b1f4320ff4063117ab09c42d227c985e1750a9a2fca" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.335645 4805 scope.go:117] "RemoveContainer" containerID="e926f9924473eff08fe262e6df894ff328407d82072b25773d16d9854397d722" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.360829 4805 scope.go:117] "RemoveContainer" containerID="3d14019d44eabd6cc556a55056d47250f87ef381b09dbcd96137383765874190" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.389902 4805 scope.go:117] "RemoveContainer" containerID="94aa67b1a9e958378c25e37f711d5aab1882d87a3f13f2dfd363f6fff074092f" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.416429 4805 scope.go:117] "RemoveContainer" containerID="8342baaad8a8d6197c0bfd4880d2722d1db08bcba63aec37af87020e83ead2cd" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.454067 4805 scope.go:117] "RemoveContainer" containerID="7abf8d1a29a20160aeb535c545d2f851a92ce0898aabfda0b32945deda7f54d6" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.479087 4805 scope.go:117] "RemoveContainer" containerID="823eba5b85f3337f7940a009fc5f4ae29680716f2485253d4bfd4b840c130beb" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.509402 4805 scope.go:117] "RemoveContainer" containerID="3f393c15ea7df46e5dd1dd67ae46d4d4aa5cc4764d4dc85f73d42cd9762691d4" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.542459 4805 scope.go:117] "RemoveContainer" containerID="1aa5f947a47892ee328e68ce29c4eb3d5620d3fbcab7206b52ebc47e9958b0f6" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.568297 4805 scope.go:117] "RemoveContainer" containerID="52ac472b28927effa17b5fea79bf9b8c8bcda95ef87a093a7fc3b3584c6ecdd8" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.595321 4805 scope.go:117] "RemoveContainer" containerID="2f87850bcce697a354ef3d598968f2d62cd4b4bdb1231f1b9766613e9df3ec35" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.621133 4805 scope.go:117] "RemoveContainer" containerID="b0b07e59cd8e57e5153ef49f88b1206ca554ca102a92edddef8ac03d861d0374" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.643185 4805 scope.go:117] "RemoveContainer" containerID="a6f7a3f060d7f022b4ea6e2832811cf1a102d5d7ce1bb1696396084c77178a15" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.677898 4805 scope.go:117] "RemoveContainer" containerID="ec0c2871e6afe66d3ea6a3a07ef450509c823f0e90339b32720995708b39b0e5" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.708414 4805 scope.go:117] "RemoveContainer" containerID="f0dd96784ef0a1eaf651dec69ad241cff4adf415f8146fae6953bf2c6658eea7" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.756129 4805 scope.go:117] "RemoveContainer" containerID="fe44a1bb50097d121e463e8be014f953906481706b8cdf598739890a12af7cbe" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.797012 4805 scope.go:117] "RemoveContainer" containerID="5989ac843b1186af3b554fbdd6da3eee35a1dd814f649d566427f624c72bf250" Feb 17 00:53:16 crc kubenswrapper[4805]: I0217 00:53:16.821828 4805 
scope.go:117] "RemoveContainer" containerID="a8bacb37f646426c210fd86904c602639990f2a74f587708204479d94952154d" Feb 17 00:53:18 crc kubenswrapper[4805]: I0217 00:53:18.047685 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-d6gtj"] Feb 17 00:53:18 crc kubenswrapper[4805]: I0217 00:53:18.069737 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-d6gtj"] Feb 17 00:53:18 crc kubenswrapper[4805]: I0217 00:53:18.790418 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 00:53:18 crc kubenswrapper[4805]: I0217 00:53:18.823717 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca71e40f-60ca-4021-974f-0057bf0963cf" path="/var/lib/kubelet/pods/ca71e40f-60ca-4021-974f-0057bf0963cf/volumes" Feb 17 00:53:18 crc kubenswrapper[4805]: E0217 00:53:18.914072 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:53:18 crc kubenswrapper[4805]: E0217 00:53:18.914138 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:53:18 crc kubenswrapper[4805]: E0217 00:53:18.914308 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:53:18 crc kubenswrapper[4805]: E0217 00:53:18.915706 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:53:25 crc kubenswrapper[4805]: I0217 00:53:25.039096 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fkb8r"] Feb 17 00:53:25 crc kubenswrapper[4805]: I0217 00:53:25.051242 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fkb8r"] Feb 17 00:53:26 crc kubenswrapper[4805]: I0217 00:53:26.804896 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a384429f-1585-4ead-bbf6-ea810c568c88" path="/var/lib/kubelet/pods/a384429f-1585-4ead-bbf6-ea810c568c88/volumes" Feb 17 00:53:27 crc kubenswrapper[4805]: E0217 00:53:27.917173 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:53:27 crc kubenswrapper[4805]: E0217 00:53:27.917557 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:53:27 crc kubenswrapper[4805]: E0217 00:53:27.917748 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:53:27 crc kubenswrapper[4805]: E0217 00:53:27.919005 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:53:33 crc kubenswrapper[4805]: E0217 00:53:33.789117 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:53:38 crc kubenswrapper[4805]: I0217 00:53:38.025550 4805 generic.go:334] "Generic (PLEG): container finished" podID="a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d" containerID="8601cdf0a6dc2f5c0efbf97b5cc4c23d91595ea99910dd76394ea76f5da0a665" exitCode=0 Feb 17 00:53:38 crc kubenswrapper[4805]: I0217 00:53:38.025705 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" event={"ID":"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d","Type":"ContainerDied","Data":"8601cdf0a6dc2f5c0efbf97b5cc4c23d91595ea99910dd76394ea76f5da0a665"} Feb 17 00:53:39 crc kubenswrapper[4805]: I0217 00:53:39.659423 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:53:39 crc kubenswrapper[4805]: I0217 00:53:39.736312 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g2nx\" (UniqueName: \"kubernetes.io/projected/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-kube-api-access-9g2nx\") pod \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " Feb 17 00:53:39 crc kubenswrapper[4805]: I0217 00:53:39.736518 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-inventory\") pod \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " Feb 17 00:53:39 crc kubenswrapper[4805]: I0217 00:53:39.736709 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-ssh-key-openstack-edpm-ipam\") pod \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\" (UID: \"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d\") " Feb 17 00:53:39 crc kubenswrapper[4805]: I0217 00:53:39.753718 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-kube-api-access-9g2nx" (OuterVolumeSpecName: "kube-api-access-9g2nx") pod "a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d" (UID: "a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d"). InnerVolumeSpecName "kube-api-access-9g2nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:53:39 crc kubenswrapper[4805]: I0217 00:53:39.775080 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-inventory" (OuterVolumeSpecName: "inventory") pod "a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d" (UID: "a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:53:39 crc kubenswrapper[4805]: I0217 00:53:39.788638 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d" (UID: "a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:53:39 crc kubenswrapper[4805]: I0217 00:53:39.839050 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:53:39 crc kubenswrapper[4805]: I0217 00:53:39.839089 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g2nx\" (UniqueName: \"kubernetes.io/projected/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-kube-api-access-9g2nx\") on node \"crc\" DevicePath \"\"" Feb 17 00:53:39 crc kubenswrapper[4805]: I0217 00:53:39.839104 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.053983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" event={"ID":"a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d","Type":"ContainerDied","Data":"457e9cf0cca5351c673cba8f7daf50ad5d079c4cfd8da46941a0ea64cce3dc5b"} Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.054304 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="457e9cf0cca5351c673cba8f7daf50ad5d079c4cfd8da46941a0ea64cce3dc5b" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.054086 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hb8nn" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.168971 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf"] Feb 17 00:53:40 crc kubenswrapper[4805]: E0217 00:53:40.169813 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.169846 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.170299 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.171888 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.174068 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.174998 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.175172 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.179202 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.195081 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf"] Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.249451 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.249727 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x96vb\" (UniqueName: \"kubernetes.io/projected/22c1452d-5db0-4327-b0ad-59b577d64796-kube-api-access-x96vb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.249780 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.352405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.352680 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x96vb\" (UniqueName: \"kubernetes.io/projected/22c1452d-5db0-4327-b0ad-59b577d64796-kube-api-access-x96vb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.352722 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.358527 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.358811 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.379474 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x96vb\" (UniqueName: \"kubernetes.io/projected/22c1452d-5db0-4327-b0ad-59b577d64796-kube-api-access-x96vb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:40 crc kubenswrapper[4805]: I0217 00:53:40.498427 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:41 crc kubenswrapper[4805]: I0217 00:53:41.174719 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf"] Feb 17 00:53:41 crc kubenswrapper[4805]: E0217 00:53:41.787523 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:53:42 crc kubenswrapper[4805]: I0217 00:53:42.092102 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" event={"ID":"22c1452d-5db0-4327-b0ad-59b577d64796","Type":"ContainerStarted","Data":"765e90f54a96f0685dfa8f94faec10a2a70efae1a9c5079550ba5c1a4f6a410e"} Feb 17 00:53:43 crc kubenswrapper[4805]: I0217 00:53:43.110426 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" event={"ID":"22c1452d-5db0-4327-b0ad-59b577d64796","Type":"ContainerStarted","Data":"4efce22cf768fcebbfae55cddb7ec7cb5bc588348171815c2083d27611397efc"} Feb 17 00:53:43 crc kubenswrapper[4805]: I0217 00:53:43.131198 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" podStartSLOduration=2.212717095 podStartE2EDuration="3.131179244s" podCreationTimestamp="2026-02-17 00:53:40 +0000 UTC" firstStartedPulling="2026-02-17 00:53:41.191949868 +0000 UTC m=+1847.207759256" lastFinishedPulling="2026-02-17 00:53:42.110411967 +0000 UTC 
m=+1848.126221405" observedRunningTime="2026-02-17 00:53:43.126267245 +0000 UTC m=+1849.142076683" watchObservedRunningTime="2026-02-17 00:53:43.131179244 +0000 UTC m=+1849.146988642" Feb 17 00:53:45 crc kubenswrapper[4805]: E0217 00:53:45.789987 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:53:47 crc kubenswrapper[4805]: I0217 00:53:47.066551 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-5ltpl"] Feb 17 00:53:47 crc kubenswrapper[4805]: I0217 00:53:47.078358 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-j7v5m"] Feb 17 00:53:47 crc kubenswrapper[4805]: I0217 00:53:47.091919 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-5ltpl"] Feb 17 00:53:47 crc kubenswrapper[4805]: I0217 00:53:47.102587 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-j7v5m"] Feb 17 00:53:47 crc kubenswrapper[4805]: I0217 00:53:47.182552 4805 generic.go:334] "Generic (PLEG): container finished" podID="22c1452d-5db0-4327-b0ad-59b577d64796" containerID="4efce22cf768fcebbfae55cddb7ec7cb5bc588348171815c2083d27611397efc" exitCode=0 Feb 17 00:53:47 crc kubenswrapper[4805]: I0217 00:53:47.182634 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" event={"ID":"22c1452d-5db0-4327-b0ad-59b577d64796","Type":"ContainerDied","Data":"4efce22cf768fcebbfae55cddb7ec7cb5bc588348171815c2083d27611397efc"} Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.772975 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.823929 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1395fd63-af68-412a-9a95-f4ffde9dfe1c" path="/var/lib/kubelet/pods/1395fd63-af68-412a-9a95-f4ffde9dfe1c/volumes" Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.825278 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38464d88-9f3b-485b-872a-98ed2ea8e3be" path="/var/lib/kubelet/pods/38464d88-9f3b-485b-872a-98ed2ea8e3be/volumes" Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.868618 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-ssh-key-openstack-edpm-ipam\") pod \"22c1452d-5db0-4327-b0ad-59b577d64796\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.868930 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x96vb\" (UniqueName: \"kubernetes.io/projected/22c1452d-5db0-4327-b0ad-59b577d64796-kube-api-access-x96vb\") pod \"22c1452d-5db0-4327-b0ad-59b577d64796\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.869039 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-inventory\") pod \"22c1452d-5db0-4327-b0ad-59b577d64796\" (UID: \"22c1452d-5db0-4327-b0ad-59b577d64796\") " Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.881112 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c1452d-5db0-4327-b0ad-59b577d64796-kube-api-access-x96vb" (OuterVolumeSpecName: "kube-api-access-x96vb") pod "22c1452d-5db0-4327-b0ad-59b577d64796" (UID: "22c1452d-5db0-4327-b0ad-59b577d64796"). InnerVolumeSpecName "kube-api-access-x96vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.906487 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "22c1452d-5db0-4327-b0ad-59b577d64796" (UID: "22c1452d-5db0-4327-b0ad-59b577d64796"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.921391 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-inventory" (OuterVolumeSpecName: "inventory") pod "22c1452d-5db0-4327-b0ad-59b577d64796" (UID: "22c1452d-5db0-4327-b0ad-59b577d64796"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.972455 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x96vb\" (UniqueName: \"kubernetes.io/projected/22c1452d-5db0-4327-b0ad-59b577d64796-kube-api-access-x96vb\") on node \"crc\" DevicePath \"\"" Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.972497 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 00:53:48 crc kubenswrapper[4805]: I0217 00:53:48.972510 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/22c1452d-5db0-4327-b0ad-59b577d64796-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.215148 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" event={"ID":"22c1452d-5db0-4327-b0ad-59b577d64796","Type":"ContainerDied","Data":"765e90f54a96f0685dfa8f94faec10a2a70efae1a9c5079550ba5c1a4f6a410e"} Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.215209 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="765e90f54a96f0685dfa8f94faec10a2a70efae1a9c5079550ba5c1a4f6a410e" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.215261 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.310956 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm"] Feb 17 00:53:49 crc kubenswrapper[4805]: E0217 00:53:49.311585 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22c1452d-5db0-4327-b0ad-59b577d64796" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.311616 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="22c1452d-5db0-4327-b0ad-59b577d64796" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.312018 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="22c1452d-5db0-4327-b0ad-59b577d64796" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.313168 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.318812 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.319266 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.320679 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.322181 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.326713 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm"] Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.380922 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.381381 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl5cs\" (UniqueName: \"kubernetes.io/projected/574d6680-e445-454e-b172-e677f2339cd2-kube-api-access-gl5cs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.381595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.483549 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.483859 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl5cs\" (UniqueName: \"kubernetes.io/projected/574d6680-e445-454e-b172-e677f2339cd2-kube-api-access-gl5cs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.484009 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.489759 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.490054 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.514659 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl5cs\" (UniqueName: \"kubernetes.io/projected/574d6680-e445-454e-b172-e677f2339cd2-kube-api-access-gl5cs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:53:49 crc kubenswrapper[4805]: I0217 00:53:49.634848 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:53:50 crc kubenswrapper[4805]: I0217 00:53:50.362214 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm"] Feb 17 00:53:51 crc kubenswrapper[4805]: I0217 00:53:51.237114 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" event={"ID":"574d6680-e445-454e-b172-e677f2339cd2","Type":"ContainerStarted","Data":"322a23401803baba660c2adc104821c8d8c62eebbdf150e5cf5c431c82f81807"} Feb 17 00:53:51 crc kubenswrapper[4805]: I0217 00:53:51.237469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" event={"ID":"574d6680-e445-454e-b172-e677f2339cd2","Type":"ContainerStarted","Data":"11408b1112c7316facec41ca010a171b30e1f558506c03b3755d2c161c46053b"} Feb 17 00:53:51 crc kubenswrapper[4805]: I0217 00:53:51.252993 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" podStartSLOduration=1.84891246 podStartE2EDuration="2.252972397s" podCreationTimestamp="2026-02-17 00:53:49 +0000 UTC" firstStartedPulling="2026-02-17 00:53:50.35295128 +0000 UTC m=+1856.368760678" lastFinishedPulling="2026-02-17 00:53:50.757011207 +0000 UTC m=+1856.772820615" observedRunningTime="2026-02-17 00:53:51.251937238 +0000 UTC m=+1857.267746646" watchObservedRunningTime="2026-02-17 00:53:51.252972397 +0000 UTC m=+1857.268781795" Feb 17 00:53:56 crc kubenswrapper[4805]: E0217 00:53:56.786789 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:54:00 crc kubenswrapper[4805]: E0217 00:54:00.788114 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:54:01 crc kubenswrapper[4805]: I0217 00:54:01.059867 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qb577"] Feb 17 00:54:01 crc kubenswrapper[4805]: I0217 00:54:01.078082 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qb577"] Feb 17 00:54:01 crc kubenswrapper[4805]: I0217 00:54:01.092880 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-64sw8"] Feb 17 00:54:01 crc kubenswrapper[4805]: I0217 00:54:01.104543 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-64sw8"] Feb 17 00:54:02 crc kubenswrapper[4805]: I0217 00:54:02.799169 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ddd3866-a515-49a8-8b48-aa6981c7536e" path="/var/lib/kubelet/pods/9ddd3866-a515-49a8-8b48-aa6981c7536e/volumes" Feb 17 00:54:02 crc kubenswrapper[4805]: I0217 00:54:02.800577 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac778b90-57e0-42ae-b661-8d7418eb00c4" path="/var/lib/kubelet/pods/ac778b90-57e0-42ae-b661-8d7418eb00c4/volumes" Feb 17 00:54:07 crc kubenswrapper[4805]: E0217 00:54:07.787875 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:54:11 crc kubenswrapper[4805]: I0217 00:54:11.041901 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-fbvsz"] Feb 17 00:54:11 crc kubenswrapper[4805]: I0217 00:54:11.052548 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-fbvsz"] Feb 17 00:54:12 crc kubenswrapper[4805]: E0217 00:54:12.788417 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:54:12 crc kubenswrapper[4805]: I0217 00:54:12.802552 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d265cd4b-2604-4a2e-902a-d31a861c2439" path="/var/lib/kubelet/pods/d265cd4b-2604-4a2e-902a-d31a861c2439/volumes" Feb 17 00:54:17 crc kubenswrapper[4805]: I0217 00:54:17.047762 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-r8kk4"] Feb 17 00:54:17 crc kubenswrapper[4805]: I0217 00:54:17.066867 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-r8kk4"] Feb 17 00:54:17 crc kubenswrapper[4805]: I0217 00:54:17.334249 4805 scope.go:117] "RemoveContainer" 
containerID="7a5150ef659fb0b7a550733ece273f6d466389a79a2dd970f196ac0271ddb0c5" Feb 17 00:54:17 crc kubenswrapper[4805]: I0217 00:54:17.364859 4805 scope.go:117] "RemoveContainer" containerID="c8831532b27ea7ca1512d47b11d4e89cfd685557c4d240ea27a352504c5cd58a" Feb 17 00:54:17 crc kubenswrapper[4805]: I0217 00:54:17.427721 4805 scope.go:117] "RemoveContainer" containerID="89b28ea93899aa41bad44f2b915dce5f20e3f498b809ed9b33e107bfe115f4f1" Feb 17 00:54:17 crc kubenswrapper[4805]: I0217 00:54:17.484033 4805 scope.go:117] "RemoveContainer" containerID="98a643290c20c6631f5d15ced493a6bb73441d364a72041d6f42422843ed387f" Feb 17 00:54:17 crc kubenswrapper[4805]: I0217 00:54:17.524971 4805 scope.go:117] "RemoveContainer" containerID="f1aa371fef229498e2ed4d986a649838b18988ac18758598d43bc5b4bdc06fa8" Feb 17 00:54:17 crc kubenswrapper[4805]: I0217 00:54:17.584459 4805 scope.go:117] "RemoveContainer" containerID="79176a8e77d9ea3f57f6a0804238aef2e7a723e97179966c5193e640f33c2e0c" Feb 17 00:54:17 crc kubenswrapper[4805]: I0217 00:54:17.638166 4805 scope.go:117] "RemoveContainer" containerID="36680b14b252dc43ab1db9e9556ba6abcf9347b16cbcea4a985d74bca748cc78" Feb 17 00:54:18 crc kubenswrapper[4805]: I0217 00:54:18.798760 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89462a0-ccda-47cf-93e9-b8cd763c3b08" path="/var/lib/kubelet/pods/e89462a0-ccda-47cf-93e9-b8cd763c3b08/volumes" Feb 17 00:54:21 crc kubenswrapper[4805]: E0217 00:54:21.787115 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:54:25 crc kubenswrapper[4805]: E0217 00:54:25.787319 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:54:36 crc kubenswrapper[4805]: E0217 00:54:36.787355 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:54:37 crc kubenswrapper[4805]: E0217 00:54:37.787039 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:54:44 crc kubenswrapper[4805]: I0217 00:54:44.914365 4805 generic.go:334] "Generic (PLEG): container finished" podID="574d6680-e445-454e-b172-e677f2339cd2" containerID="322a23401803baba660c2adc104821c8d8c62eebbdf150e5cf5c431c82f81807" exitCode=0 Feb 17 00:54:44 crc kubenswrapper[4805]: I0217 00:54:44.914388 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" 
event={"ID":"574d6680-e445-454e-b172-e677f2339cd2","Type":"ContainerDied","Data":"322a23401803baba660c2adc104821c8d8c62eebbdf150e5cf5c431c82f81807"} Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.427679 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.483559 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-inventory\") pod \"574d6680-e445-454e-b172-e677f2339cd2\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.483682 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl5cs\" (UniqueName: \"kubernetes.io/projected/574d6680-e445-454e-b172-e677f2339cd2-kube-api-access-gl5cs\") pod \"574d6680-e445-454e-b172-e677f2339cd2\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.493534 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574d6680-e445-454e-b172-e677f2339cd2-kube-api-access-gl5cs" (OuterVolumeSpecName: "kube-api-access-gl5cs") pod "574d6680-e445-454e-b172-e677f2339cd2" (UID: "574d6680-e445-454e-b172-e677f2339cd2"). InnerVolumeSpecName "kube-api-access-gl5cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.555006 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-inventory" (OuterVolumeSpecName: "inventory") pod "574d6680-e445-454e-b172-e677f2339cd2" (UID: "574d6680-e445-454e-b172-e677f2339cd2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.589723 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-ssh-key-openstack-edpm-ipam\") pod \"574d6680-e445-454e-b172-e677f2339cd2\" (UID: \"574d6680-e445-454e-b172-e677f2339cd2\") " Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.590635 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.590665 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl5cs\" (UniqueName: \"kubernetes.io/projected/574d6680-e445-454e-b172-e677f2339cd2-kube-api-access-gl5cs\") on node \"crc\" DevicePath \"\"" Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.626364 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "574d6680-e445-454e-b172-e677f2339cd2" (UID: "574d6680-e445-454e-b172-e677f2339cd2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.692363 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/574d6680-e445-454e-b172-e677f2339cd2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.941018 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" event={"ID":"574d6680-e445-454e-b172-e677f2339cd2","Type":"ContainerDied","Data":"11408b1112c7316facec41ca010a171b30e1f558506c03b3755d2c161c46053b"} Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.941047 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm" Feb 17 00:54:46 crc kubenswrapper[4805]: I0217 00:54:46.941103 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11408b1112c7316facec41ca010a171b30e1f558506c03b3755d2c161c46053b" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.056272 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j4hz5"] Feb 17 00:54:47 crc kubenswrapper[4805]: E0217 00:54:47.056848 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574d6680-e445-454e-b172-e677f2339cd2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.056873 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="574d6680-e445-454e-b172-e677f2339cd2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.057186 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="574d6680-e445-454e-b172-e677f2339cd2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.058176 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.060616 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.060728 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.061039 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.063021 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.099464 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j4hz5"] Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.206677 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j4hz5\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.206744 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmw4j\" (UniqueName: \"kubernetes.io/projected/daab539c-cd12-429d-b5ec-a957900aa0c2-kube-api-access-hmw4j\") pod \"ssh-known-hosts-edpm-deployment-j4hz5\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.206825 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j4hz5\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.308883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j4hz5\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.308948 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmw4j\" (UniqueName: \"kubernetes.io/projected/daab539c-cd12-429d-b5ec-a957900aa0c2-kube-api-access-hmw4j\") pod \"ssh-known-hosts-edpm-deployment-j4hz5\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.309038 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j4hz5\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:47 crc 
kubenswrapper[4805]: I0217 00:54:47.313809 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j4hz5\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.323175 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j4hz5\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.339007 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmw4j\" (UniqueName: \"kubernetes.io/projected/daab539c-cd12-429d-b5ec-a957900aa0c2-kube-api-access-hmw4j\") pod \"ssh-known-hosts-edpm-deployment-j4hz5\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:47 crc kubenswrapper[4805]: I0217 00:54:47.379418 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:47 crc kubenswrapper[4805]: E0217 00:54:47.785923 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:54:48 crc kubenswrapper[4805]: I0217 00:54:48.045801 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j4hz5"] Feb 17 00:54:48 crc kubenswrapper[4805]: W0217 00:54:48.063233 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaab539c_cd12_429d_b5ec_a957900aa0c2.slice/crio-182b6b496e2a7a5c97630b1feecf896a285af1531bf5c029ecc00927fb5f3389 WatchSource:0}: Error finding container 182b6b496e2a7a5c97630b1feecf896a285af1531bf5c029ecc00927fb5f3389: Status 404 returned error can't find the container with id 182b6b496e2a7a5c97630b1feecf896a285af1531bf5c029ecc00927fb5f3389 Feb 17 00:54:48 crc kubenswrapper[4805]: I0217 00:54:48.965047 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" event={"ID":"daab539c-cd12-429d-b5ec-a957900aa0c2","Type":"ContainerStarted","Data":"dd5250d8140c3755c4668d16808ae48f813608c84facbb6cc0926f2b8c2aa6f1"} Feb 17 00:54:48 crc kubenswrapper[4805]: I0217 00:54:48.965367 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" event={"ID":"daab539c-cd12-429d-b5ec-a957900aa0c2","Type":"ContainerStarted","Data":"182b6b496e2a7a5c97630b1feecf896a285af1531bf5c029ecc00927fb5f3389"} Feb 17 00:54:48 crc kubenswrapper[4805]: I0217 00:54:48.992479 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" podStartSLOduration=1.542184867 podStartE2EDuration="1.992462893s" podCreationTimestamp="2026-02-17 00:54:47 +0000 UTC" firstStartedPulling="2026-02-17 00:54:48.06719186 +0000 UTC 
m=+1914.083001278" lastFinishedPulling="2026-02-17 00:54:48.517469866 +0000 UTC m=+1914.533279304" observedRunningTime="2026-02-17 00:54:48.991932748 +0000 UTC m=+1915.007742146" watchObservedRunningTime="2026-02-17 00:54:48.992462893 +0000 UTC m=+1915.008272291" Feb 17 00:54:52 crc kubenswrapper[4805]: E0217 00:54:52.788078 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:54:56 crc kubenswrapper[4805]: I0217 00:54:56.037618 4805 generic.go:334] "Generic (PLEG): container finished" podID="daab539c-cd12-429d-b5ec-a957900aa0c2" containerID="dd5250d8140c3755c4668d16808ae48f813608c84facbb6cc0926f2b8c2aa6f1" exitCode=0 Feb 17 00:54:56 crc kubenswrapper[4805]: I0217 00:54:56.037668 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" event={"ID":"daab539c-cd12-429d-b5ec-a957900aa0c2","Type":"ContainerDied","Data":"dd5250d8140c3755c4668d16808ae48f813608c84facbb6cc0926f2b8c2aa6f1"} Feb 17 00:54:57 crc kubenswrapper[4805]: I0217 00:54:57.533694 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:57 crc kubenswrapper[4805]: I0217 00:54:57.685443 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmw4j\" (UniqueName: \"kubernetes.io/projected/daab539c-cd12-429d-b5ec-a957900aa0c2-kube-api-access-hmw4j\") pod \"daab539c-cd12-429d-b5ec-a957900aa0c2\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " Feb 17 00:54:57 crc kubenswrapper[4805]: I0217 00:54:57.685576 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-ssh-key-openstack-edpm-ipam\") pod \"daab539c-cd12-429d-b5ec-a957900aa0c2\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " Feb 17 00:54:57 crc kubenswrapper[4805]: I0217 00:54:57.685817 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-inventory-0\") pod \"daab539c-cd12-429d-b5ec-a957900aa0c2\" (UID: \"daab539c-cd12-429d-b5ec-a957900aa0c2\") " Feb 17 00:54:57 crc kubenswrapper[4805]: I0217 00:54:57.692165 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daab539c-cd12-429d-b5ec-a957900aa0c2-kube-api-access-hmw4j" (OuterVolumeSpecName: "kube-api-access-hmw4j") pod "daab539c-cd12-429d-b5ec-a957900aa0c2" (UID: "daab539c-cd12-429d-b5ec-a957900aa0c2"). InnerVolumeSpecName "kube-api-access-hmw4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:54:57 crc kubenswrapper[4805]: I0217 00:54:57.736296 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "daab539c-cd12-429d-b5ec-a957900aa0c2" (UID: "daab539c-cd12-429d-b5ec-a957900aa0c2"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:54:57 crc kubenswrapper[4805]: I0217 00:54:57.740982 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "daab539c-cd12-429d-b5ec-a957900aa0c2" (UID: "daab539c-cd12-429d-b5ec-a957900aa0c2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:54:57 crc kubenswrapper[4805]: I0217 00:54:57.788945 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:54:57 crc kubenswrapper[4805]: I0217 00:54:57.788995 4805 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/daab539c-cd12-429d-b5ec-a957900aa0c2-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:54:57 crc kubenswrapper[4805]: I0217 00:54:57.789015 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmw4j\" (UniqueName: \"kubernetes.io/projected/daab539c-cd12-429d-b5ec-a957900aa0c2-kube-api-access-hmw4j\") on node \"crc\" DevicePath \"\"" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.066744 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" event={"ID":"daab539c-cd12-429d-b5ec-a957900aa0c2","Type":"ContainerDied","Data":"182b6b496e2a7a5c97630b1feecf896a285af1531bf5c029ecc00927fb5f3389"} Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.066795 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="182b6b496e2a7a5c97630b1feecf896a285af1531bf5c029ecc00927fb5f3389" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.066836 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j4hz5" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.166667 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h"] Feb 17 00:54:58 crc kubenswrapper[4805]: E0217 00:54:58.167192 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daab539c-cd12-429d-b5ec-a957900aa0c2" containerName="ssh-known-hosts-edpm-deployment" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.167213 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="daab539c-cd12-429d-b5ec-a957900aa0c2" containerName="ssh-known-hosts-edpm-deployment" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.167500 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="daab539c-cd12-429d-b5ec-a957900aa0c2" containerName="ssh-known-hosts-edpm-deployment" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.168430 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.170425 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.170631 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.171122 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.171847 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.189013 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h"] Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.301366 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mv28h\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.301457 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz72t\" (UniqueName: \"kubernetes.io/projected/dfa04663-d25d-40ee-a669-097d415e754e-kube-api-access-rz72t\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mv28h\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.301512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mv28h\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.402936 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz72t\" (UniqueName: \"kubernetes.io/projected/dfa04663-d25d-40ee-a669-097d415e754e-kube-api-access-rz72t\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mv28h\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.403224 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mv28h\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.403489 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-mv28h\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.410563 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mv28h\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.410754 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mv28h\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.435920 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz72t\" (UniqueName: \"kubernetes.io/projected/dfa04663-d25d-40ee-a669-097d415e754e-kube-api-access-rz72t\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mv28h\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:54:58 crc kubenswrapper[4805]: I0217 00:54:58.492966 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:54:58 crc kubenswrapper[4805]: E0217 00:54:58.787438 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:54:59 crc kubenswrapper[4805]: I0217 00:54:59.261948 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h"] Feb 17 00:54:59 crc kubenswrapper[4805]: W0217 00:54:59.268737 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfa04663_d25d_40ee_a669_097d415e754e.slice/crio-8c8dd1c61088e5225ce3fa95c7cf8804c155ee6d25fe303fdac7a840fa3307c1 WatchSource:0}: Error finding container 8c8dd1c61088e5225ce3fa95c7cf8804c155ee6d25fe303fdac7a840fa3307c1: Status 404 returned error can't find the container with id 8c8dd1c61088e5225ce3fa95c7cf8804c155ee6d25fe303fdac7a840fa3307c1 Feb 17 00:55:00 crc kubenswrapper[4805]: I0217 00:55:00.098109 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" event={"ID":"dfa04663-d25d-40ee-a669-097d415e754e","Type":"ContainerStarted","Data":"e2bbac46973e0c18e8faba83627896c63502c7865ddb1ae7144948d74c6bcad8"} Feb 17 00:55:00 crc kubenswrapper[4805]: I0217 00:55:00.098632 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" event={"ID":"dfa04663-d25d-40ee-a669-097d415e754e","Type":"ContainerStarted","Data":"8c8dd1c61088e5225ce3fa95c7cf8804c155ee6d25fe303fdac7a840fa3307c1"} Feb 17 00:55:00 crc kubenswrapper[4805]: I0217 00:55:00.127660 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" podStartSLOduration=1.6610598140000001 podStartE2EDuration="2.127640232s" podCreationTimestamp="2026-02-17 00:54:58 +0000 UTC" firstStartedPulling="2026-02-17 00:54:59.271610681 +0000 UTC m=+1925.287420089" lastFinishedPulling="2026-02-17 00:54:59.738191069 +0000 UTC m=+1925.754000507" observedRunningTime="2026-02-17 00:55:00.118534034 +0000 UTC m=+1926.134343512" watchObservedRunningTime="2026-02-17 00:55:00.127640232 +0000 UTC m=+1926.143449640" Feb 17 00:55:05 crc kubenswrapper[4805]: E0217 00:55:05.786962 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:55:08 crc kubenswrapper[4805]: I0217 00:55:08.215026 4805 generic.go:334] "Generic (PLEG): container finished" podID="dfa04663-d25d-40ee-a669-097d415e754e" containerID="e2bbac46973e0c18e8faba83627896c63502c7865ddb1ae7144948d74c6bcad8" exitCode=0 Feb 17 00:55:08 crc kubenswrapper[4805]: I0217 00:55:08.215152 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" event={"ID":"dfa04663-d25d-40ee-a669-097d415e754e","Type":"ContainerDied","Data":"e2bbac46973e0c18e8faba83627896c63502c7865ddb1ae7144948d74c6bcad8"} Feb 17 00:55:09 crc kubenswrapper[4805]: I0217 00:55:09.780678 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:55:09 crc kubenswrapper[4805]: I0217 00:55:09.891304 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-ssh-key-openstack-edpm-ipam\") pod \"dfa04663-d25d-40ee-a669-097d415e754e\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " Feb 17 00:55:09 crc kubenswrapper[4805]: I0217 00:55:09.891420 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-inventory\") pod \"dfa04663-d25d-40ee-a669-097d415e754e\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " Feb 17 00:55:09 crc kubenswrapper[4805]: I0217 00:55:09.891687 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz72t\" (UniqueName: \"kubernetes.io/projected/dfa04663-d25d-40ee-a669-097d415e754e-kube-api-access-rz72t\") pod \"dfa04663-d25d-40ee-a669-097d415e754e\" (UID: \"dfa04663-d25d-40ee-a669-097d415e754e\") " Feb 17 00:55:09 crc kubenswrapper[4805]: I0217 00:55:09.897204 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa04663-d25d-40ee-a669-097d415e754e-kube-api-access-rz72t" (OuterVolumeSpecName: "kube-api-access-rz72t") pod "dfa04663-d25d-40ee-a669-097d415e754e" (UID: "dfa04663-d25d-40ee-a669-097d415e754e"). InnerVolumeSpecName "kube-api-access-rz72t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:55:09 crc kubenswrapper[4805]: I0217 00:55:09.925927 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dfa04663-d25d-40ee-a669-097d415e754e" (UID: "dfa04663-d25d-40ee-a669-097d415e754e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:55:09 crc kubenswrapper[4805]: I0217 00:55:09.931112 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-inventory" (OuterVolumeSpecName: "inventory") pod "dfa04663-d25d-40ee-a669-097d415e754e" (UID: "dfa04663-d25d-40ee-a669-097d415e754e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:55:09 crc kubenswrapper[4805]: I0217 00:55:09.994748 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz72t\" (UniqueName: \"kubernetes.io/projected/dfa04663-d25d-40ee-a669-097d415e754e-kube-api-access-rz72t\") on node \"crc\" DevicePath \"\"" Feb 17 00:55:09 crc kubenswrapper[4805]: I0217 00:55:09.994790 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:55:09 crc kubenswrapper[4805]: I0217 00:55:09.994805 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfa04663-d25d-40ee-a669-097d415e754e-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.244068 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" event={"ID":"dfa04663-d25d-40ee-a669-097d415e754e","Type":"ContainerDied","Data":"8c8dd1c61088e5225ce3fa95c7cf8804c155ee6d25fe303fdac7a840fa3307c1"} Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.244328 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c8dd1c61088e5225ce3fa95c7cf8804c155ee6d25fe303fdac7a840fa3307c1" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.244215 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mv28h" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.381124 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c"] Feb 17 00:55:10 crc kubenswrapper[4805]: E0217 00:55:10.381589 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa04663-d25d-40ee-a669-097d415e754e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.381609 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa04663-d25d-40ee-a669-097d415e754e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.381828 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa04663-d25d-40ee-a669-097d415e754e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.382613 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.385928 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.386191 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.388277 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.390708 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.438165 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c"] Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.503086 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.503524 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.503564 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnh5f\" (UniqueName: \"kubernetes.io/projected/a0357546-9ba3-46f6-98cd-bee9c102f671-kube-api-access-cnh5f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.605724 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.605772 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnh5f\" (UniqueName: \"kubernetes.io/projected/a0357546-9ba3-46f6-98cd-bee9c102f671-kube-api-access-cnh5f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.605828 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.609733 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.622040 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.625827 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnh5f\" (UniqueName: \"kubernetes.io/projected/a0357546-9ba3-46f6-98cd-bee9c102f671-kube-api-access-cnh5f\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:10 crc kubenswrapper[4805]: I0217 00:55:10.751490 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:10 crc kubenswrapper[4805]: E0217 00:55:10.786534 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:55:11 crc kubenswrapper[4805]: I0217 00:55:11.364196 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c"] Feb 17 00:55:11 crc kubenswrapper[4805]: W0217 00:55:11.368818 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0357546_9ba3_46f6_98cd_bee9c102f671.slice/crio-2311e9969d94120ddb3814ca4fb674f630c6c2d8543f204ea81a22acab7c01ca WatchSource:0}: Error finding container 2311e9969d94120ddb3814ca4fb674f630c6c2d8543f204ea81a22acab7c01ca: Status 404 returned error can't find the container with id 2311e9969d94120ddb3814ca4fb674f630c6c2d8543f204ea81a22acab7c01ca Feb 17 00:55:12 crc kubenswrapper[4805]: I0217 00:55:12.278901 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" event={"ID":"a0357546-9ba3-46f6-98cd-bee9c102f671","Type":"ContainerStarted","Data":"2311e9969d94120ddb3814ca4fb674f630c6c2d8543f204ea81a22acab7c01ca"} Feb 17 00:55:13 crc kubenswrapper[4805]: I0217 00:55:13.295629 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" event={"ID":"a0357546-9ba3-46f6-98cd-bee9c102f671","Type":"ContainerStarted","Data":"866e507cc74d66858f25de737e5cefa604f23434d50d135cda96dc8a33bbf778"} Feb 17 00:55:13 crc kubenswrapper[4805]: I0217 
00:55:13.335622 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" podStartSLOduration=2.483958086 podStartE2EDuration="3.335592551s" podCreationTimestamp="2026-02-17 00:55:10 +0000 UTC" firstStartedPulling="2026-02-17 00:55:11.371310455 +0000 UTC m=+1937.387119853" lastFinishedPulling="2026-02-17 00:55:12.22294492 +0000 UTC m=+1938.238754318" observedRunningTime="2026-02-17 00:55:13.327630625 +0000 UTC m=+1939.343440023" watchObservedRunningTime="2026-02-17 00:55:13.335592551 +0000 UTC m=+1939.351401989" Feb 17 00:55:14 crc kubenswrapper[4805]: I0217 00:55:14.112055 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d364-account-create-update-r4pjv"] Feb 17 00:55:14 crc kubenswrapper[4805]: I0217 00:55:14.128071 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-fdg8l"] Feb 17 00:55:14 crc kubenswrapper[4805]: I0217 00:55:14.140127 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d364-account-create-update-r4pjv"] Feb 17 00:55:14 crc kubenswrapper[4805]: I0217 00:55:14.151451 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-sflzs"] Feb 17 00:55:14 crc kubenswrapper[4805]: I0217 00:55:14.160631 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-fdg8l"] Feb 17 00:55:14 crc kubenswrapper[4805]: I0217 00:55:14.169357 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-sflzs"] Feb 17 00:55:14 crc kubenswrapper[4805]: I0217 00:55:14.805167 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068c01a0-347f-401a-bac0-b0e82bb04e7d" path="/var/lib/kubelet/pods/068c01a0-347f-401a-bac0-b0e82bb04e7d/volumes" Feb 17 00:55:14 crc kubenswrapper[4805]: I0217 00:55:14.806464 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d764513-224d-4ccb-acc5-49f319acaa63" path="/var/lib/kubelet/pods/7d764513-224d-4ccb-acc5-49f319acaa63/volumes" Feb 17 00:55:14 crc kubenswrapper[4805]: I0217 00:55:14.807289 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4244588-a78b-401f-be2f-9d1c4f70fc40" path="/var/lib/kubelet/pods/a4244588-a78b-401f-be2f-9d1c4f70fc40/volumes" Feb 17 00:55:15 crc kubenswrapper[4805]: I0217 00:55:15.057709 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-2m728"] Feb 17 00:55:15 crc kubenswrapper[4805]: I0217 00:55:15.074806 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-43ce-account-create-update-l5hkp"] Feb 17 00:55:15 crc kubenswrapper[4805]: I0217 00:55:15.085713 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d59a-account-create-update-sddjc"] Feb 17 00:55:15 crc kubenswrapper[4805]: I0217 00:55:15.095974 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-2m728"] Feb 17 00:55:15 crc kubenswrapper[4805]: I0217 00:55:15.104673 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d59a-account-create-update-sddjc"] Feb 17 00:55:15 crc kubenswrapper[4805]: I0217 00:55:15.111758 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-43ce-account-create-update-l5hkp"] Feb 17 00:55:16 crc kubenswrapper[4805]: E0217 00:55:16.787261 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:55:16 crc kubenswrapper[4805]: I0217 00:55:16.797748 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2828a3f7-804a-467f-aeb0-f0a2aab63c85" path="/var/lib/kubelet/pods/2828a3f7-804a-467f-aeb0-f0a2aab63c85/volumes" Feb 17 00:55:16 crc kubenswrapper[4805]: I0217 00:55:16.798474 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b" path="/var/lib/kubelet/pods/f2d4ac3b-a1b7-4e76-9ece-2b53b976e05b/volumes" Feb 17 00:55:16 crc kubenswrapper[4805]: I0217 00:55:16.799165 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc948a0e-80b8-4692-997e-7c034e6e0b26" path="/var/lib/kubelet/pods/fc948a0e-80b8-4692-997e-7c034e6e0b26/volumes" Feb 17 00:55:17 crc kubenswrapper[4805]: I0217 00:55:17.863068 4805 scope.go:117] "RemoveContainer" containerID="f8193068ea49b80a759fcc4f57663e132a889f3763ab6c888e8bcb88ccc7044a" Feb 17 00:55:17 crc kubenswrapper[4805]: I0217 00:55:17.916712 4805 scope.go:117] "RemoveContainer" containerID="2de4fb278c535f7e0e137671be608bdbfc1db2791b94a46f4c39e309374d9ee5" Feb 17 00:55:17 crc kubenswrapper[4805]: I0217 00:55:17.983233 4805 scope.go:117] "RemoveContainer" containerID="e0fd1dd8d942807fe2dfa5240e3be3bbe6fb9d94151dafd469fffeed4031f486" Feb 17 00:55:18 crc kubenswrapper[4805]: I0217 00:55:18.018529 4805 scope.go:117] "RemoveContainer" containerID="cc6debe96d1ba6f753a8fa21cb99e24b28660a8c260a191f527b69659733a9b7" Feb 17 00:55:18 crc kubenswrapper[4805]: I0217 00:55:18.061012 4805 scope.go:117] "RemoveContainer" containerID="201d67b148cd31ce445883bf4e7186640714adb6981db38625ca86574f6e3442" Feb 17 00:55:18 crc kubenswrapper[4805]: I0217 00:55:18.100930 4805 scope.go:117] "RemoveContainer" containerID="ee9588fea770657fb2ad8fb91aaf3dac6c8b272b0804d899c477ec6534290196" Feb 17 00:55:18 crc kubenswrapper[4805]: I0217 00:55:18.149850 4805 scope.go:117] "RemoveContainer" containerID="c990bf1ca91d471d573bf212cec9c762c283af21285f8df889311e8dc4430c43" Feb 17 00:55:22 crc kubenswrapper[4805]: I0217 00:55:22.400978 4805 generic.go:334] "Generic (PLEG): container finished" podID="a0357546-9ba3-46f6-98cd-bee9c102f671" containerID="866e507cc74d66858f25de737e5cefa604f23434d50d135cda96dc8a33bbf778" exitCode=0 Feb 17 00:55:22 crc kubenswrapper[4805]: I0217 00:55:22.401069 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" event={"ID":"a0357546-9ba3-46f6-98cd-bee9c102f671","Type":"ContainerDied","Data":"866e507cc74d66858f25de737e5cefa604f23434d50d135cda96dc8a33bbf778"} Feb 17 00:55:23 crc kubenswrapper[4805]: I0217 00:55:23.077012 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:55:23 crc kubenswrapper[4805]: I0217 00:55:23.077306 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 17 00:55:23 crc kubenswrapper[4805]: I0217 00:55:23.908268 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:23 crc kubenswrapper[4805]: I0217 00:55:23.928362 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-inventory\") pod \"a0357546-9ba3-46f6-98cd-bee9c102f671\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " Feb 17 00:55:23 crc kubenswrapper[4805]: I0217 00:55:23.928502 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnh5f\" (UniqueName: \"kubernetes.io/projected/a0357546-9ba3-46f6-98cd-bee9c102f671-kube-api-access-cnh5f\") pod \"a0357546-9ba3-46f6-98cd-bee9c102f671\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " Feb 17 00:55:23 crc kubenswrapper[4805]: I0217 00:55:23.928676 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-ssh-key-openstack-edpm-ipam\") pod \"a0357546-9ba3-46f6-98cd-bee9c102f671\" (UID: \"a0357546-9ba3-46f6-98cd-bee9c102f671\") " Feb 17 00:55:23 crc kubenswrapper[4805]: I0217 00:55:23.937675 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0357546-9ba3-46f6-98cd-bee9c102f671-kube-api-access-cnh5f" (OuterVolumeSpecName: "kube-api-access-cnh5f") pod "a0357546-9ba3-46f6-98cd-bee9c102f671" (UID: "a0357546-9ba3-46f6-98cd-bee9c102f671"). InnerVolumeSpecName "kube-api-access-cnh5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:55:23 crc kubenswrapper[4805]: I0217 00:55:23.981739 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a0357546-9ba3-46f6-98cd-bee9c102f671" (UID: "a0357546-9ba3-46f6-98cd-bee9c102f671"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.001807 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-inventory" (OuterVolumeSpecName: "inventory") pod "a0357546-9ba3-46f6-98cd-bee9c102f671" (UID: "a0357546-9ba3-46f6-98cd-bee9c102f671"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.030467 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.030499 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0357546-9ba3-46f6-98cd-bee9c102f671-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.030511 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnh5f\" (UniqueName: \"kubernetes.io/projected/a0357546-9ba3-46f6-98cd-bee9c102f671-kube-api-access-cnh5f\") on node \"crc\" DevicePath \"\"" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.431499 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" event={"ID":"a0357546-9ba3-46f6-98cd-bee9c102f671","Type":"ContainerDied","Data":"2311e9969d94120ddb3814ca4fb674f630c6c2d8543f204ea81a22acab7c01ca"} Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.431555 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2311e9969d94120ddb3814ca4fb674f630c6c2d8543f204ea81a22acab7c01ca" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.431632 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.532283 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65"] Feb 17 00:55:24 crc kubenswrapper[4805]: E0217 00:55:24.532732 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0357546-9ba3-46f6-98cd-bee9c102f671" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.532753 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0357546-9ba3-46f6-98cd-bee9c102f671" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.532921 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0357546-9ba3-46f6-98cd-bee9c102f671" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.533641 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.535792 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.538586 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.538861 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.539067 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.539268 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.539506 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.541290 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.541476 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.542972 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx9nf\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-kube-api-access-cx9nf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.543077 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.543218 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.543248 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 
00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.543291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.543402 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.543601 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.543631 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.543759 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.543889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.543921 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.543948 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.544102 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.550070 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65"] Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.645946 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx9nf\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-kube-api-access-cx9nf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.646049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.646120 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.646152 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.646198 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc 
kubenswrapper[4805]: I0217 00:55:24.646261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.646423 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.646491 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.646534 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.646591 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.646632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.646671 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.646747 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.649924 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.649928 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.651003 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.653894 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.654751 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.654811 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.656539 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.658205 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.658311 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.658999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.661912 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.662221 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.665742 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx9nf\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-kube-api-access-cx9nf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5sq65\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:24 crc kubenswrapper[4805]: E0217 00:55:24.807639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:55:24 crc kubenswrapper[4805]: I0217 00:55:24.856914 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:55:25 crc kubenswrapper[4805]: I0217 00:55:25.528413 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65"] Feb 17 00:55:25 crc kubenswrapper[4805]: W0217 00:55:25.534448 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7077a918_ba16_4a9a_90c5_3fcf25331039.slice/crio-d98c6c0808dec9a08cea0c7406ed284c886c72eae35e3e3652720467a85338db WatchSource:0}: Error finding container d98c6c0808dec9a08cea0c7406ed284c886c72eae35e3e3652720467a85338db: Status 404 returned error can't find the container with id d98c6c0808dec9a08cea0c7406ed284c886c72eae35e3e3652720467a85338db Feb 17 00:55:26 crc kubenswrapper[4805]: I0217 00:55:26.461864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" event={"ID":"7077a918-ba16-4a9a-90c5-3fcf25331039","Type":"ContainerStarted","Data":"d98c6c0808dec9a08cea0c7406ed284c886c72eae35e3e3652720467a85338db"} Feb 17 00:55:27 crc kubenswrapper[4805]: I0217 00:55:27.478225 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" event={"ID":"7077a918-ba16-4a9a-90c5-3fcf25331039","Type":"ContainerStarted","Data":"ff73b114e06ffb4db5f1e4a9a4d4996f9b191336b491a65c920cc2407633d83b"} Feb 17 00:55:27 crc kubenswrapper[4805]: I0217 00:55:27.503702 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" podStartSLOduration=2.825147066 podStartE2EDuration="3.503684807s" podCreationTimestamp="2026-02-17 00:55:24 +0000 UTC" firstStartedPulling="2026-02-17 00:55:25.539925167 +0000 UTC m=+1951.555734565" lastFinishedPulling="2026-02-17 00:55:26.218462898 +0000 UTC m=+1952.234272306" observedRunningTime="2026-02-17 00:55:27.501081194 +0000 UTC m=+1953.516890592" watchObservedRunningTime="2026-02-17 00:55:27.503684807 +0000 UTC m=+1953.519494195" Feb 17 00:55:27 crc kubenswrapper[4805]: E0217 00:55:27.787437 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:55:35 crc kubenswrapper[4805]: E0217 00:55:35.788486 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:55:40 crc kubenswrapper[4805]: E0217 00:55:40.787705 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:55:49 crc kubenswrapper[4805]: I0217 00:55:49.062451 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-t5rdz"] Feb 17 00:55:49 crc 
kubenswrapper[4805]: I0217 00:55:49.071475 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-a86f-account-create-update-bhfpf"] Feb 17 00:55:49 crc kubenswrapper[4805]: I0217 00:55:49.078933 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-t5rdz"] Feb 17 00:55:49 crc kubenswrapper[4805]: I0217 00:55:49.086460 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-a86f-account-create-update-bhfpf"] Feb 17 00:55:50 crc kubenswrapper[4805]: I0217 00:55:50.031957 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2rpd"] Feb 17 00:55:50 crc kubenswrapper[4805]: I0217 00:55:50.041580 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-l2rpd"] Feb 17 00:55:50 crc kubenswrapper[4805]: E0217 00:55:50.790423 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:55:50 crc kubenswrapper[4805]: I0217 00:55:50.820769 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b9acd80-9e5b-4608-89e4-24ec65d4740e" path="/var/lib/kubelet/pods/1b9acd80-9e5b-4608-89e4-24ec65d4740e/volumes" Feb 17 00:55:50 crc kubenswrapper[4805]: I0217 00:55:50.822855 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e200cb5-e7c9-416c-857b-71caf2b00de3" path="/var/lib/kubelet/pods/2e200cb5-e7c9-416c-857b-71caf2b00de3/volumes" Feb 17 00:55:50 crc kubenswrapper[4805]: I0217 00:55:50.824765 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2174d96-6433-4a4d-9f5a-ebd2f9088bd8" path="/var/lib/kubelet/pods/e2174d96-6433-4a4d-9f5a-ebd2f9088bd8/volumes" Feb 17 00:55:52 crc kubenswrapper[4805]: E0217 00:55:52.794413 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:55:53 crc kubenswrapper[4805]: I0217 00:55:53.076736 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:55:53 crc kubenswrapper[4805]: I0217 00:55:53.076806 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:56:00 crc kubenswrapper[4805]: I0217 00:56:00.053046 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-rb4bb"] Feb 17 00:56:00 crc kubenswrapper[4805]: I0217 00:56:00.062489 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-rb4bb"] Feb 17 00:56:00 crc kubenswrapper[4805]: I0217 00:56:00.806317 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="10155d2c-a497-44a7-9cbd-c7023421781f" path="/var/lib/kubelet/pods/10155d2c-a497-44a7-9cbd-c7023421781f/volumes" Feb 17 00:56:03 crc kubenswrapper[4805]: E0217 00:56:03.787877 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:56:04 crc kubenswrapper[4805]: E0217 00:56:04.800861 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:56:04 crc kubenswrapper[4805]: I0217 00:56:04.931793 4805 generic.go:334] "Generic (PLEG): container finished" podID="7077a918-ba16-4a9a-90c5-3fcf25331039" containerID="ff73b114e06ffb4db5f1e4a9a4d4996f9b191336b491a65c920cc2407633d83b" exitCode=0 Feb 17 00:56:04 crc kubenswrapper[4805]: I0217 00:56:04.931852 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" event={"ID":"7077a918-ba16-4a9a-90c5-3fcf25331039","Type":"ContainerDied","Data":"ff73b114e06ffb4db5f1e4a9a4d4996f9b191336b491a65c920cc2407633d83b"} Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.457447 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.571191 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-bootstrap-combined-ca-bundle\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.571438 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.571466 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-repo-setup-combined-ca-bundle\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.572121 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-inventory\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.572155 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.572176 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-combined-ca-bundle\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.572238 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ovn-combined-ca-bundle\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.572358 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.572385 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-libvirt-combined-ca-bundle\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.572431 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-ovn-default-certs-0\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.572472 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-power-monitoring-combined-ca-bundle\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.572533 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ssh-key-openstack-edpm-ipam\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.572650 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx9nf\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-kube-api-access-cx9nf\") pod \"7077a918-ba16-4a9a-90c5-3fcf25331039\" (UID: \"7077a918-ba16-4a9a-90c5-3fcf25331039\") " Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.579465 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-kube-api-access-cx9nf" (OuterVolumeSpecName: 
"kube-api-access-cx9nf") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "kube-api-access-cx9nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.580498 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.580607 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.581098 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.581200 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.582568 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.583212 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.586610 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.588271 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.596940 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.597214 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.610608 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.652735 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-inventory" (OuterVolumeSpecName: "inventory") pod "7077a918-ba16-4a9a-90c5-3fcf25331039" (UID: "7077a918-ba16-4a9a-90c5-3fcf25331039"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676751 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676794 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx9nf\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-kube-api-access-cx9nf\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676803 4805 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676812 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676822 4805 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676832 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676842 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676854 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676865 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676874 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676883 4805 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676891 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7077a918-ba16-4a9a-90c5-3fcf25331039-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.676901 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7077a918-ba16-4a9a-90c5-3fcf25331039-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.952489 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" event={"ID":"7077a918-ba16-4a9a-90c5-3fcf25331039","Type":"ContainerDied","Data":"d98c6c0808dec9a08cea0c7406ed284c886c72eae35e3e3652720467a85338db"} Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.952543 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5sq65" Feb 17 00:56:06 crc kubenswrapper[4805]: I0217 00:56:06.952549 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d98c6c0808dec9a08cea0c7406ed284c886c72eae35e3e3652720467a85338db" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.092056 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw"] Feb 17 00:56:07 crc kubenswrapper[4805]: E0217 00:56:07.092743 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7077a918-ba16-4a9a-90c5-3fcf25331039" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.092775 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7077a918-ba16-4a9a-90c5-3fcf25331039" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.093100 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7077a918-ba16-4a9a-90c5-3fcf25331039" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.094277 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.097152 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.098878 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.099119 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.099318 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.099579 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.121441 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw"] Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.186855 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.187263 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpnkz\" (UniqueName: \"kubernetes.io/projected/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-kube-api-access-gpnkz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.187367 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.187637 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.187929 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.289283 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.289363 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.289428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpnkz\" (UniqueName: \"kubernetes.io/projected/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-kube-api-access-gpnkz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.289449 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.289510 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.290908 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.294217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.294362 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.295962 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.307902 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpnkz\" (UniqueName: \"kubernetes.io/projected/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-kube-api-access-gpnkz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ztjgw\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:07 crc kubenswrapper[4805]: I0217 00:56:07.421837 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:56:08 crc kubenswrapper[4805]: I0217 00:56:08.028974 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw"] Feb 17 00:56:08 crc kubenswrapper[4805]: W0217 00:56:08.035154 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a8a3709_95d6_48e3_94bb_b41bb5ed017c.slice/crio-a1b11cfbd4f426f42c77d6727a6baed260e46c7382d87831bf1fb2b83518b791 WatchSource:0}: Error finding container a1b11cfbd4f426f42c77d6727a6baed260e46c7382d87831bf1fb2b83518b791: Status 404 returned error can't find the container with id a1b11cfbd4f426f42c77d6727a6baed260e46c7382d87831bf1fb2b83518b791 Feb 17 00:56:08 crc kubenswrapper[4805]: I0217 00:56:08.983876 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" event={"ID":"0a8a3709-95d6-48e3-94bb-b41bb5ed017c","Type":"ContainerStarted","Data":"bc4f7a82283f05bc26e3c645a791f7eebbb81dec51011fcf04cd1d077a6355b7"} Feb 17 00:56:08 crc kubenswrapper[4805]: I0217 00:56:08.984246 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" event={"ID":"0a8a3709-95d6-48e3-94bb-b41bb5ed017c","Type":"ContainerStarted","Data":"a1b11cfbd4f426f42c77d6727a6baed260e46c7382d87831bf1fb2b83518b791"} Feb 17 00:56:09 crc kubenswrapper[4805]: I0217 00:56:09.008937 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" podStartSLOduration=1.5644551 podStartE2EDuration="2.008913271s" podCreationTimestamp="2026-02-17 00:56:07 +0000 UTC" firstStartedPulling="2026-02-17 00:56:08.03880949 +0000 UTC m=+1994.054618888" lastFinishedPulling="2026-02-17 00:56:08.483267621 +0000 UTC m=+1994.499077059" observedRunningTime="2026-02-17 00:56:09.005206946 +0000 UTC m=+1995.021016384" watchObservedRunningTime="2026-02-17 00:56:09.008913271 +0000 UTC m=+1995.024722669" Feb 17 00:56:15 crc kubenswrapper[4805]: I0217 00:56:15.043415 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7b5v8"] Feb 17 00:56:15 crc kubenswrapper[4805]: I0217 00:56:15.055841 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7b5v8"] Feb 17 00:56:16 crc kubenswrapper[4805]: E0217 00:56:16.787474 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:56:16 crc kubenswrapper[4805]: I0217 00:56:16.800947 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c2db1e4-4262-4a81-83fe-a9b9f0565beb" path="/var/lib/kubelet/pods/1c2db1e4-4262-4a81-83fe-a9b9f0565beb/volumes" Feb 17 00:56:17 crc kubenswrapper[4805]: I0217 00:56:17.040889 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jbwzz"] Feb 17 00:56:17 crc kubenswrapper[4805]: I0217 00:56:17.057142 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jbwzz"] Feb 17 00:56:18 crc kubenswrapper[4805]: I0217 00:56:18.319485 4805 scope.go:117] "RemoveContainer" containerID="67ead1324c592bb2f9282dd9c7338d7c0af707b13e79925de758fc59f823933a" Feb 17 00:56:18 crc kubenswrapper[4805]: I0217 00:56:18.365180 4805 scope.go:117] "RemoveContainer" containerID="09de379b8f5db063b29dd5eab57f4e0d9c4565882e5a42d14afd344c6835f6ec" Feb 17 00:56:18 crc kubenswrapper[4805]: I0217 00:56:18.464863 4805 scope.go:117] "RemoveContainer" containerID="49fb302df7845bc1cad0e323dac46a516f0ac83b0d976718a49d4d4a0252f981" Feb 17 00:56:18 crc kubenswrapper[4805]: I0217 00:56:18.514883 4805 scope.go:117] "RemoveContainer" containerID="8410fe2ae9be1281827b99be50277ebb72bd084c8b661a3b72db40b46851bc94" Feb 17 00:56:18 crc kubenswrapper[4805]: I0217 00:56:18.586167 4805 scope.go:117] "RemoveContainer" containerID="8a8e3a3cf7f7794e6e3728588ef954dc866e44e0f7dd4b062f7071342adcca5c" Feb 17 00:56:18 crc kubenswrapper[4805]: E0217 00:56:18.798055 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:56:18 crc kubenswrapper[4805]: I0217 00:56:18.809773 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de33584-3604-4b64-ae95-9d18066a35a6" path="/var/lib/kubelet/pods/3de33584-3604-4b64-ae95-9d18066a35a6/volumes" Feb 17 00:56:23 crc kubenswrapper[4805]: I0217 00:56:23.077060 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:56:23 crc kubenswrapper[4805]: I0217 00:56:23.077818 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:56:23 crc kubenswrapper[4805]: I0217 00:56:23.077894 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:56:23 crc kubenswrapper[4805]: I0217 00:56:23.079223 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e9087c41c20ceb94baae00268714860eae0b0c62339840278c0c8161853155d"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:56:23 crc kubenswrapper[4805]: I0217 00:56:23.079376 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://2e9087c41c20ceb94baae00268714860eae0b0c62339840278c0c8161853155d" gracePeriod=600 Feb 17 00:56:24 crc kubenswrapper[4805]: I0217 00:56:24.162900 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="2e9087c41c20ceb94baae00268714860eae0b0c62339840278c0c8161853155d" exitCode=0 Feb 17 00:56:24 crc kubenswrapper[4805]: I0217 00:56:24.163031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"2e9087c41c20ceb94baae00268714860eae0b0c62339840278c0c8161853155d"} Feb 17 00:56:24 crc kubenswrapper[4805]: I0217 00:56:24.163536 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e"} Feb 17 00:56:24 crc kubenswrapper[4805]: I0217 00:56:24.163567 4805 scope.go:117] "RemoveContainer" containerID="7dc52887af1c26a424f35ddcecc2b65d0ae5f8a595032319aca80ecd9682290b" Feb 17 00:56:31 crc kubenswrapper[4805]: E0217 00:56:31.791139 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:56:31 crc kubenswrapper[4805]: E0217 00:56:31.791354 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:56:44 crc kubenswrapper[4805]: E0217 00:56:44.810762 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:56:45 crc kubenswrapper[4805]: E0217 00:56:45.787109 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:56:57 crc kubenswrapper[4805]: E0217 00:56:57.786878 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:56:59 crc kubenswrapper[4805]: E0217 00:56:59.787169 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:57:00 crc kubenswrapper[4805]: I0217 00:57:00.053199 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8lhxq"] Feb 17 00:57:00 crc kubenswrapper[4805]: I0217 00:57:00.064470 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8lhxq"] Feb 17 00:57:00 crc kubenswrapper[4805]: I0217 00:57:00.834164 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4" path="/var/lib/kubelet/pods/283dbd3a-e5ca-4e5f-beb2-59c9498f0fb4/volumes" Feb 17 00:57:10 crc kubenswrapper[4805]: E0217 00:57:10.791283 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:57:13 crc kubenswrapper[4805]: E0217 00:57:13.788057 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:57:18 crc kubenswrapper[4805]: I0217 00:57:18.838019 4805 scope.go:117] "RemoveContainer" containerID="fde80f8efc7c4b6e4801b99af9d81b1bf763d9ffb205267c5e8bb2b173764ae9" Feb 17 00:57:18 crc kubenswrapper[4805]: I0217 00:57:18.899031 4805 scope.go:117] "RemoveContainer" containerID="5f0cd60d7fbd48c58c1edeb30fe4192e14f9dd1277a35fa9a671b5eb210a3f7d" Feb 17 00:57:21 crc kubenswrapper[4805]: E0217 00:57:21.786440 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:57:25 crc kubenswrapper[4805]: I0217 00:57:25.902854 4805 generic.go:334] "Generic (PLEG): container finished" podID="0a8a3709-95d6-48e3-94bb-b41bb5ed017c" containerID="bc4f7a82283f05bc26e3c645a791f7eebbb81dec51011fcf04cd1d077a6355b7" exitCode=0 Feb 17 00:57:25 crc kubenswrapper[4805]: I0217 00:57:25.902893 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" event={"ID":"0a8a3709-95d6-48e3-94bb-b41bb5ed017c","Type":"ContainerDied","Data":"bc4f7a82283f05bc26e3c645a791f7eebbb81dec51011fcf04cd1d077a6355b7"} Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.402865 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.540683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovncontroller-config-0\") pod \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.540814 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovn-combined-ca-bundle\") pod \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.540892 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpnkz\" (UniqueName: \"kubernetes.io/projected/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-kube-api-access-gpnkz\") pod \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.540913 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ssh-key-openstack-edpm-ipam\") pod \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.541690 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-inventory\") pod \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\" (UID: \"0a8a3709-95d6-48e3-94bb-b41bb5ed017c\") " Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.546351 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0a8a3709-95d6-48e3-94bb-b41bb5ed017c" (UID: "0a8a3709-95d6-48e3-94bb-b41bb5ed017c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.547802 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-kube-api-access-gpnkz" (OuterVolumeSpecName: "kube-api-access-gpnkz") pod "0a8a3709-95d6-48e3-94bb-b41bb5ed017c" (UID: "0a8a3709-95d6-48e3-94bb-b41bb5ed017c"). InnerVolumeSpecName "kube-api-access-gpnkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.570149 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0a8a3709-95d6-48e3-94bb-b41bb5ed017c" (UID: "0a8a3709-95d6-48e3-94bb-b41bb5ed017c"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.577386 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-inventory" (OuterVolumeSpecName: "inventory") pod "0a8a3709-95d6-48e3-94bb-b41bb5ed017c" (UID: "0a8a3709-95d6-48e3-94bb-b41bb5ed017c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.582048 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a8a3709-95d6-48e3-94bb-b41bb5ed017c" (UID: "0a8a3709-95d6-48e3-94bb-b41bb5ed017c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.644988 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.645264 4805 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.645447 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.645617 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpnkz\" (UniqueName: \"kubernetes.io/projected/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-kube-api-access-gpnkz\") on node \"crc\" DevicePath \"\"" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.645753 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a8a3709-95d6-48e3-94bb-b41bb5ed017c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 00:57:27 crc kubenswrapper[4805]: E0217 00:57:27.788189 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.949964 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" event={"ID":"0a8a3709-95d6-48e3-94bb-b41bb5ed017c","Type":"ContainerDied","Data":"a1b11cfbd4f426f42c77d6727a6baed260e46c7382d87831bf1fb2b83518b791"} Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.950021 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1b11cfbd4f426f42c77d6727a6baed260e46c7382d87831bf1fb2b83518b791" Feb 17 00:57:27 crc kubenswrapper[4805]: I0217 00:57:27.950414 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ztjgw" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.146190 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7"] Feb 17 00:57:28 crc kubenswrapper[4805]: E0217 00:57:28.146730 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8a3709-95d6-48e3-94bb-b41bb5ed017c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.146749 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8a3709-95d6-48e3-94bb-b41bb5ed017c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.147095 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8a3709-95d6-48e3-94bb-b41bb5ed017c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.147957 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.152590 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.153153 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.153539 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.154806 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.159720 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.186088 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7"] Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.262030 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.262186 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.262276 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.262435 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.262529 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4d8l\" (UniqueName: \"kubernetes.io/projected/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-kube-api-access-t4d8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.364537 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.364828 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.364999 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.365176 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4d8l\" (UniqueName: \"kubernetes.io/projected/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-kube-api-access-t4d8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.365453 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.371582 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.372817 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.374531 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.378395 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.395141 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4d8l\" (UniqueName: \"kubernetes.io/projected/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-kube-api-access-t4d8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86ss7\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:28 crc kubenswrapper[4805]: I0217 00:57:28.496034 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 00:57:29 crc kubenswrapper[4805]: I0217 00:57:29.092716 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7"] Feb 17 00:57:29 crc kubenswrapper[4805]: I0217 00:57:29.976217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" event={"ID":"4a95c358-9f7f-42e7-b497-7f9f76dc01ce","Type":"ContainerStarted","Data":"e070247f14c9fcbdba5634a6710edef15ec3f07ab95ddae9e5a142cb270ad52e"} Feb 17 00:57:30 crc kubenswrapper[4805]: I0217 00:57:30.990309 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" event={"ID":"4a95c358-9f7f-42e7-b497-7f9f76dc01ce","Type":"ContainerStarted","Data":"d2a787b3bda38f9973c51e977f871419fac1280b03aac9caa2994136d9f4c38b"} Feb 17 00:57:31 crc kubenswrapper[4805]: I0217 00:57:31.015143 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" podStartSLOduration=2.411688936 podStartE2EDuration="3.015118283s" podCreationTimestamp="2026-02-17 00:57:28 +0000 UTC" firstStartedPulling="2026-02-17 00:57:29.100048267 +0000 UTC m=+2075.115857705" lastFinishedPulling="2026-02-17 00:57:29.703477644 +0000 UTC m=+2075.719287052" observedRunningTime="2026-02-17 00:57:31.003874619 +0000 UTC m=+2077.019684017" watchObservedRunningTime="2026-02-17 00:57:31.015118283 +0000 UTC m=+2077.030927711" Feb 17 00:57:33 crc kubenswrapper[4805]: E0217 00:57:33.787807 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:57:38 crc kubenswrapper[4805]: E0217 00:57:38.787942 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:57:48 crc kubenswrapper[4805]: E0217 00:57:48.788703 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:57:53 crc kubenswrapper[4805]: E0217 00:57:53.786958 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:57:59 crc kubenswrapper[4805]: E0217 00:57:59.787973 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:58:07 crc kubenswrapper[4805]: E0217 00:58:07.787168 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:58:13 crc kubenswrapper[4805]: E0217 00:58:13.786927 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:58:22 crc kubenswrapper[4805]: I0217 00:58:22.789074 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 00:58:22 crc kubenswrapper[4805]: E0217 00:58:22.921485 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:58:22 crc kubenswrapper[4805]: E0217 00:58:22.921557 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 00:58:22 crc kubenswrapper[4805]: E0217 00:58:22.921681 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:58:22 crc kubenswrapper[4805]: E0217 00:58:22.922897 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:58:23 crc kubenswrapper[4805]: I0217 00:58:23.077142 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:58:23 crc kubenswrapper[4805]: I0217 00:58:23.077202 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:58:28 crc kubenswrapper[4805]: E0217 00:58:28.917043 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:58:28 crc kubenswrapper[4805]: E0217 00:58:28.917847 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 00:58:28 crc kubenswrapper[4805]: E0217 00:58:28.918026 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 00:58:28 crc kubenswrapper[4805]: E0217 00:58:28.919285 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:58:37 crc kubenswrapper[4805]: E0217 00:58:37.788839 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:58:42 crc kubenswrapper[4805]: E0217 00:58:42.786981 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:58:52 crc kubenswrapper[4805]: E0217 00:58:52.787293 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:58:53 crc kubenswrapper[4805]: I0217 00:58:53.076762 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:58:53 crc kubenswrapper[4805]: I0217 00:58:53.076820 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:58:54 crc kubenswrapper[4805]: E0217 00:58:54.823525 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:59:04 crc kubenswrapper[4805]: E0217 00:59:04.797876 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:59:04 crc kubenswrapper[4805]: I0217 00:59:04.997635 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2dnnj"] Feb 17 00:59:04 crc kubenswrapper[4805]: I0217 00:59:04.999848 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.013205 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dnnj"] Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.110578 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-catalog-content\") pod \"certified-operators-2dnnj\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.110664 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-utilities\") pod \"certified-operators-2dnnj\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.110739 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgr66\" (UniqueName: \"kubernetes.io/projected/5740a6a5-b1ee-4169-b8f8-309892ffb118-kube-api-access-mgr66\") pod \"certified-operators-2dnnj\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.213084 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgr66\" (UniqueName: \"kubernetes.io/projected/5740a6a5-b1ee-4169-b8f8-309892ffb118-kube-api-access-mgr66\") pod \"certified-operators-2dnnj\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.213632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-catalog-content\") pod \"certified-operators-2dnnj\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.214273 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-catalog-content\") pod \"certified-operators-2dnnj\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.214438 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-utilities\") pod \"certified-operators-2dnnj\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.215077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-utilities\") pod \"certified-operators-2dnnj\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.242010 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mgr66\" (UniqueName: \"kubernetes.io/projected/5740a6a5-b1ee-4169-b8f8-309892ffb118-kube-api-access-mgr66\") pod \"certified-operators-2dnnj\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.349516 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:05 crc kubenswrapper[4805]: I0217 00:59:05.919962 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dnnj"] Feb 17 00:59:06 crc kubenswrapper[4805]: I0217 00:59:06.232168 4805 generic.go:334] "Generic (PLEG): container finished" podID="5740a6a5-b1ee-4169-b8f8-309892ffb118" containerID="183e976297ed1135a31637dd01071a9e8da8fc3fa9315180efc10c80c221dc95" exitCode=0 Feb 17 00:59:06 crc kubenswrapper[4805]: I0217 00:59:06.232208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dnnj" event={"ID":"5740a6a5-b1ee-4169-b8f8-309892ffb118","Type":"ContainerDied","Data":"183e976297ed1135a31637dd01071a9e8da8fc3fa9315180efc10c80c221dc95"} Feb 17 00:59:06 crc kubenswrapper[4805]: I0217 00:59:06.232232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dnnj" event={"ID":"5740a6a5-b1ee-4169-b8f8-309892ffb118","Type":"ContainerStarted","Data":"d4be97047d091f1e6d9db840c22ec02c86c1d3d3a5bd7df7ac4ad1bd969ce308"} Feb 17 00:59:07 crc kubenswrapper[4805]: E0217 00:59:07.786586 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:59:08 crc kubenswrapper[4805]: I0217 00:59:08.264318 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dnnj" event={"ID":"5740a6a5-b1ee-4169-b8f8-309892ffb118","Type":"ContainerStarted","Data":"eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd"} Feb 17 00:59:09 crc kubenswrapper[4805]: I0217 00:59:09.273906 4805 generic.go:334] "Generic (PLEG): container finished" podID="5740a6a5-b1ee-4169-b8f8-309892ffb118" containerID="eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd" exitCode=0 Feb 17 00:59:09 crc kubenswrapper[4805]: I0217 00:59:09.273962 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dnnj" event={"ID":"5740a6a5-b1ee-4169-b8f8-309892ffb118","Type":"ContainerDied","Data":"eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd"} Feb 17 00:59:10 crc kubenswrapper[4805]: I0217 00:59:10.285838 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dnnj" event={"ID":"5740a6a5-b1ee-4169-b8f8-309892ffb118","Type":"ContainerStarted","Data":"7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6"} Feb 17 00:59:10 crc kubenswrapper[4805]: I0217 00:59:10.318878 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2dnnj" podStartSLOduration=2.853340077 podStartE2EDuration="6.318858394s" podCreationTimestamp="2026-02-17 00:59:04 +0000 UTC" firstStartedPulling="2026-02-17 
00:59:06.234053334 +0000 UTC m=+2172.249862732" lastFinishedPulling="2026-02-17 00:59:09.699571641 +0000 UTC m=+2175.715381049" observedRunningTime="2026-02-17 00:59:10.310827213 +0000 UTC m=+2176.326636621" watchObservedRunningTime="2026-02-17 00:59:10.318858394 +0000 UTC m=+2176.334667812" Feb 17 00:59:15 crc kubenswrapper[4805]: I0217 00:59:15.349686 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:15 crc kubenswrapper[4805]: I0217 00:59:15.351992 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:15 crc kubenswrapper[4805]: I0217 00:59:15.428719 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:16 crc kubenswrapper[4805]: I0217 00:59:16.455800 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:16 crc kubenswrapper[4805]: I0217 00:59:16.533764 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dnnj"] Feb 17 00:59:18 crc kubenswrapper[4805]: I0217 00:59:18.392945 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2dnnj" podUID="5740a6a5-b1ee-4169-b8f8-309892ffb118" containerName="registry-server" containerID="cri-o://7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6" gracePeriod=2 Feb 17 00:59:18 crc kubenswrapper[4805]: E0217 00:59:18.791023 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:59:18 crc kubenswrapper[4805]: I0217 00:59:18.955518 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:18 crc kubenswrapper[4805]: I0217 00:59:18.969560 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-catalog-content\") pod \"5740a6a5-b1ee-4169-b8f8-309892ffb118\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " Feb 17 00:59:18 crc kubenswrapper[4805]: I0217 00:59:18.969962 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-utilities\") pod \"5740a6a5-b1ee-4169-b8f8-309892ffb118\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " Feb 17 00:59:18 crc kubenswrapper[4805]: I0217 00:59:18.970048 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgr66\" (UniqueName: \"kubernetes.io/projected/5740a6a5-b1ee-4169-b8f8-309892ffb118-kube-api-access-mgr66\") pod \"5740a6a5-b1ee-4169-b8f8-309892ffb118\" (UID: \"5740a6a5-b1ee-4169-b8f8-309892ffb118\") " Feb 17 00:59:18 crc kubenswrapper[4805]: I0217 00:59:18.970756 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-utilities" (OuterVolumeSpecName: "utilities") pod "5740a6a5-b1ee-4169-b8f8-309892ffb118" (UID: "5740a6a5-b1ee-4169-b8f8-309892ffb118"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:59:18 crc kubenswrapper[4805]: I0217 00:59:18.971184 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:59:18 crc kubenswrapper[4805]: I0217 00:59:18.977971 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5740a6a5-b1ee-4169-b8f8-309892ffb118-kube-api-access-mgr66" (OuterVolumeSpecName: "kube-api-access-mgr66") pod "5740a6a5-b1ee-4169-b8f8-309892ffb118" (UID: "5740a6a5-b1ee-4169-b8f8-309892ffb118"). InnerVolumeSpecName "kube-api-access-mgr66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.034381 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5740a6a5-b1ee-4169-b8f8-309892ffb118" (UID: "5740a6a5-b1ee-4169-b8f8-309892ffb118"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.073757 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgr66\" (UniqueName: \"kubernetes.io/projected/5740a6a5-b1ee-4169-b8f8-309892ffb118-kube-api-access-mgr66\") on node \"crc\" DevicePath \"\"" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.073825 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5740a6a5-b1ee-4169-b8f8-309892ffb118-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.407285 4805 generic.go:334] "Generic (PLEG): container finished" podID="5740a6a5-b1ee-4169-b8f8-309892ffb118" containerID="7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6" exitCode=0 Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.407401 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dnnj" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.407396 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dnnj" event={"ID":"5740a6a5-b1ee-4169-b8f8-309892ffb118","Type":"ContainerDied","Data":"7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6"} Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.407631 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dnnj" event={"ID":"5740a6a5-b1ee-4169-b8f8-309892ffb118","Type":"ContainerDied","Data":"d4be97047d091f1e6d9db840c22ec02c86c1d3d3a5bd7df7ac4ad1bd969ce308"} Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.407677 4805 scope.go:117] "RemoveContainer" containerID="7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.439711 4805 scope.go:117] "RemoveContainer" containerID="eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.486894 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dnnj"] Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.494382 4805 scope.go:117] "RemoveContainer" containerID="183e976297ed1135a31637dd01071a9e8da8fc3fa9315180efc10c80c221dc95" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.499036 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2dnnj"] Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.537159 4805 scope.go:117] "RemoveContainer" containerID="7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6" Feb 17 00:59:19 crc kubenswrapper[4805]: E0217 00:59:19.538021 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6\": container with ID starting with 7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6 not found: ID does not exist" containerID="7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.538087 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6"} err="failed to get container status 
\"7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6\": rpc error: code = NotFound desc = could not find container \"7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6\": container with ID starting with 7ffff7955cff3d5b07e183b8d38aedce2abc31ac23629760ca17a0cf03535ae6 not found: ID does not exist" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.538117 4805 scope.go:117] "RemoveContainer" containerID="eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd" Feb 17 00:59:19 crc kubenswrapper[4805]: E0217 00:59:19.538644 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd\": container with ID starting with eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd not found: ID does not exist" containerID="eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.538721 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd"} err="failed to get container status \"eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd\": rpc error: code = NotFound desc = could not find container \"eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd\": container with ID starting with eeb1f26d6454e230cddcb75552d8b76e278cc8c50ee8c659d25c8a91f4119ebd not found: ID does not exist" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.538768 4805 scope.go:117] "RemoveContainer" containerID="183e976297ed1135a31637dd01071a9e8da8fc3fa9315180efc10c80c221dc95" Feb 17 00:59:19 crc kubenswrapper[4805]: E0217 00:59:19.539141 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"183e976297ed1135a31637dd01071a9e8da8fc3fa9315180efc10c80c221dc95\": container with ID starting with 183e976297ed1135a31637dd01071a9e8da8fc3fa9315180efc10c80c221dc95 not found: ID does not exist" containerID="183e976297ed1135a31637dd01071a9e8da8fc3fa9315180efc10c80c221dc95" Feb 17 00:59:19 crc kubenswrapper[4805]: I0217 00:59:19.539179 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"183e976297ed1135a31637dd01071a9e8da8fc3fa9315180efc10c80c221dc95"} err="failed to get container status \"183e976297ed1135a31637dd01071a9e8da8fc3fa9315180efc10c80c221dc95\": rpc error: code = NotFound desc = could not find container \"183e976297ed1135a31637dd01071a9e8da8fc3fa9315180efc10c80c221dc95\": container with ID starting with 183e976297ed1135a31637dd01071a9e8da8fc3fa9315180efc10c80c221dc95 not found: ID does not exist" Feb 17 00:59:20 crc kubenswrapper[4805]: I0217 00:59:20.797904 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5740a6a5-b1ee-4169-b8f8-309892ffb118" path="/var/lib/kubelet/pods/5740a6a5-b1ee-4169-b8f8-309892ffb118/volumes" Feb 17 00:59:22 crc kubenswrapper[4805]: E0217 00:59:22.789859 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:59:23 crc kubenswrapper[4805]: I0217 00:59:23.076745 4805 patch_prober.go:28] 
interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:59:23 crc kubenswrapper[4805]: I0217 00:59:23.077109 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:59:23 crc kubenswrapper[4805]: I0217 00:59:23.077176 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 00:59:23 crc kubenswrapper[4805]: I0217 00:59:23.078147 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:59:23 crc kubenswrapper[4805]: I0217 00:59:23.078254 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" gracePeriod=600 Feb 17 00:59:23 crc kubenswrapper[4805]: E0217 00:59:23.205279 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:59:23 crc kubenswrapper[4805]: I0217 00:59:23.461205 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" exitCode=0 Feb 17 00:59:23 crc kubenswrapper[4805]: I0217 00:59:23.461251 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e"} Feb 17 00:59:23 crc kubenswrapper[4805]: I0217 00:59:23.461285 4805 scope.go:117] "RemoveContainer" containerID="2e9087c41c20ceb94baae00268714860eae0b0c62339840278c0c8161853155d" Feb 17 00:59:23 crc kubenswrapper[4805]: I0217 00:59:23.462450 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 00:59:23 crc kubenswrapper[4805]: E0217 00:59:23.462980 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:59:31 crc kubenswrapper[4805]: I0217 00:59:31.858485 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xl6"] Feb 17 00:59:31 crc kubenswrapper[4805]: E0217 00:59:31.859789 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5740a6a5-b1ee-4169-b8f8-309892ffb118" containerName="extract-utilities" Feb 17 00:59:31 crc kubenswrapper[4805]: I0217 00:59:31.859814 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5740a6a5-b1ee-4169-b8f8-309892ffb118" containerName="extract-utilities" Feb 17 00:59:31 crc kubenswrapper[4805]: E0217 00:59:31.859840 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5740a6a5-b1ee-4169-b8f8-309892ffb118" containerName="extract-content" Feb 17 00:59:31 crc kubenswrapper[4805]: I0217 00:59:31.859853 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5740a6a5-b1ee-4169-b8f8-309892ffb118" containerName="extract-content" Feb 17 00:59:31 crc kubenswrapper[4805]: E0217 00:59:31.859881 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5740a6a5-b1ee-4169-b8f8-309892ffb118" containerName="registry-server" Feb 17 00:59:31 crc kubenswrapper[4805]: I0217 00:59:31.859893 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5740a6a5-b1ee-4169-b8f8-309892ffb118" containerName="registry-server" Feb 17 00:59:31 crc kubenswrapper[4805]: I0217 00:59:31.860365 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5740a6a5-b1ee-4169-b8f8-309892ffb118" containerName="registry-server" Feb 17 00:59:31 crc kubenswrapper[4805]: I0217 00:59:31.864501 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:31 crc kubenswrapper[4805]: I0217 00:59:31.878315 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xl6"] Feb 17 00:59:31 crc kubenswrapper[4805]: I0217 00:59:31.986146 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-catalog-content\") pod \"redhat-marketplace-t4xl6\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:31 crc kubenswrapper[4805]: I0217 00:59:31.986356 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-utilities\") pod \"redhat-marketplace-t4xl6\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:31 crc kubenswrapper[4805]: I0217 00:59:31.986515 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7kqf\" (UniqueName: \"kubernetes.io/projected/472ad06d-d06a-4335-a41a-d96504b824a4-kube-api-access-x7kqf\") pod \"redhat-marketplace-t4xl6\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:32 crc kubenswrapper[4805]: I0217 00:59:32.087931 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7kqf\" (UniqueName: \"kubernetes.io/projected/472ad06d-d06a-4335-a41a-d96504b824a4-kube-api-access-x7kqf\") pod \"redhat-marketplace-t4xl6\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:32 crc kubenswrapper[4805]: I0217 00:59:32.088150 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-catalog-content\") pod \"redhat-marketplace-t4xl6\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:32 crc kubenswrapper[4805]: I0217 00:59:32.088211 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-utilities\") pod \"redhat-marketplace-t4xl6\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:32 crc kubenswrapper[4805]: I0217 00:59:32.088800 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-catalog-content\") pod \"redhat-marketplace-t4xl6\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:32 crc kubenswrapper[4805]: I0217 00:59:32.088859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-utilities\") pod \"redhat-marketplace-t4xl6\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:32 crc kubenswrapper[4805]: I0217 00:59:32.123943 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x7kqf\" (UniqueName: \"kubernetes.io/projected/472ad06d-d06a-4335-a41a-d96504b824a4-kube-api-access-x7kqf\") pod \"redhat-marketplace-t4xl6\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:32 crc kubenswrapper[4805]: I0217 00:59:32.203354 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:32 crc kubenswrapper[4805]: I0217 00:59:32.731049 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xl6"] Feb 17 00:59:32 crc kubenswrapper[4805]: W0217 00:59:32.742682 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod472ad06d_d06a_4335_a41a_d96504b824a4.slice/crio-487ece505e145117a1becd2338b24d79085c30910ead945d3bf407ed7d6eaeb7 WatchSource:0}: Error finding container 487ece505e145117a1becd2338b24d79085c30910ead945d3bf407ed7d6eaeb7: Status 404 returned error can't find the container with id 487ece505e145117a1becd2338b24d79085c30910ead945d3bf407ed7d6eaeb7 Feb 17 00:59:32 crc kubenswrapper[4805]: E0217 00:59:32.787113 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:59:33 crc kubenswrapper[4805]: E0217 00:59:33.277135 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod472ad06d_d06a_4335_a41a_d96504b824a4.slice/crio-conmon-1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod472ad06d_d06a_4335_a41a_d96504b824a4.slice/crio-1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea.scope\": RecentStats: unable to find data in memory cache]" Feb 17 00:59:33 crc kubenswrapper[4805]: I0217 00:59:33.593071 4805 generic.go:334] "Generic (PLEG): container finished" podID="472ad06d-d06a-4335-a41a-d96504b824a4" containerID="1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea" exitCode=0 Feb 17 00:59:33 crc kubenswrapper[4805]: I0217 00:59:33.593116 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xl6" event={"ID":"472ad06d-d06a-4335-a41a-d96504b824a4","Type":"ContainerDied","Data":"1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea"} Feb 17 00:59:33 crc kubenswrapper[4805]: I0217 00:59:33.593145 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xl6" event={"ID":"472ad06d-d06a-4335-a41a-d96504b824a4","Type":"ContainerStarted","Data":"487ece505e145117a1becd2338b24d79085c30910ead945d3bf407ed7d6eaeb7"} Feb 17 00:59:34 crc kubenswrapper[4805]: I0217 00:59:34.605260 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xl6" event={"ID":"472ad06d-d06a-4335-a41a-d96504b824a4","Type":"ContainerStarted","Data":"8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e"} Feb 17 00:59:35 crc kubenswrapper[4805]: I0217 00:59:35.618771 4805 generic.go:334] 
"Generic (PLEG): container finished" podID="472ad06d-d06a-4335-a41a-d96504b824a4" containerID="8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e" exitCode=0 Feb 17 00:59:35 crc kubenswrapper[4805]: I0217 00:59:35.618887 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xl6" event={"ID":"472ad06d-d06a-4335-a41a-d96504b824a4","Type":"ContainerDied","Data":"8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e"} Feb 17 00:59:36 crc kubenswrapper[4805]: I0217 00:59:36.634025 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xl6" event={"ID":"472ad06d-d06a-4335-a41a-d96504b824a4","Type":"ContainerStarted","Data":"985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f"} Feb 17 00:59:36 crc kubenswrapper[4805]: I0217 00:59:36.688096 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t4xl6" podStartSLOduration=3.299916709 podStartE2EDuration="5.68807594s" podCreationTimestamp="2026-02-17 00:59:31 +0000 UTC" firstStartedPulling="2026-02-17 00:59:33.602275782 +0000 UTC m=+2199.618085180" lastFinishedPulling="2026-02-17 00:59:35.990434983 +0000 UTC m=+2202.006244411" observedRunningTime="2026-02-17 00:59:36.665235193 +0000 UTC m=+2202.681044611" watchObservedRunningTime="2026-02-17 00:59:36.68807594 +0000 UTC m=+2202.703885348" Feb 17 00:59:36 crc kubenswrapper[4805]: E0217 00:59:36.794677 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:59:37 crc kubenswrapper[4805]: I0217 00:59:37.784518 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 00:59:37 crc kubenswrapper[4805]: E0217 00:59:37.785594 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:59:42 crc kubenswrapper[4805]: I0217 00:59:42.205730 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:42 crc kubenswrapper[4805]: I0217 00:59:42.206407 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:42 crc kubenswrapper[4805]: I0217 00:59:42.287973 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:42 crc kubenswrapper[4805]: I0217 00:59:42.841617 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:42 crc kubenswrapper[4805]: I0217 00:59:42.941073 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xl6"] Feb 17 00:59:44 crc kubenswrapper[4805]: I0217 00:59:44.754873 4805 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t4xl6" podUID="472ad06d-d06a-4335-a41a-d96504b824a4" containerName="registry-server" containerID="cri-o://985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f" gracePeriod=2 Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.286423 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.399878 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7kqf\" (UniqueName: \"kubernetes.io/projected/472ad06d-d06a-4335-a41a-d96504b824a4-kube-api-access-x7kqf\") pod \"472ad06d-d06a-4335-a41a-d96504b824a4\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.400240 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-catalog-content\") pod \"472ad06d-d06a-4335-a41a-d96504b824a4\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.401486 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-utilities\") pod \"472ad06d-d06a-4335-a41a-d96504b824a4\" (UID: \"472ad06d-d06a-4335-a41a-d96504b824a4\") " Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.403231 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-utilities" (OuterVolumeSpecName: "utilities") pod "472ad06d-d06a-4335-a41a-d96504b824a4" (UID: "472ad06d-d06a-4335-a41a-d96504b824a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.409573 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472ad06d-d06a-4335-a41a-d96504b824a4-kube-api-access-x7kqf" (OuterVolumeSpecName: "kube-api-access-x7kqf") pod "472ad06d-d06a-4335-a41a-d96504b824a4" (UID: "472ad06d-d06a-4335-a41a-d96504b824a4"). InnerVolumeSpecName "kube-api-access-x7kqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.442842 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "472ad06d-d06a-4335-a41a-d96504b824a4" (UID: "472ad06d-d06a-4335-a41a-d96504b824a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.504545 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7kqf\" (UniqueName: \"kubernetes.io/projected/472ad06d-d06a-4335-a41a-d96504b824a4-kube-api-access-x7kqf\") on node \"crc\" DevicePath \"\"" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.504598 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.504617 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472ad06d-d06a-4335-a41a-d96504b824a4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.768317 4805 generic.go:334] "Generic (PLEG): container finished" podID="472ad06d-d06a-4335-a41a-d96504b824a4" containerID="985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f" exitCode=0 Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.768399 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xl6" event={"ID":"472ad06d-d06a-4335-a41a-d96504b824a4","Type":"ContainerDied","Data":"985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f"} Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.768435 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4xl6" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.769497 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4xl6" event={"ID":"472ad06d-d06a-4335-a41a-d96504b824a4","Type":"ContainerDied","Data":"487ece505e145117a1becd2338b24d79085c30910ead945d3bf407ed7d6eaeb7"} Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.769515 4805 scope.go:117] "RemoveContainer" containerID="985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.800480 4805 scope.go:117] "RemoveContainer" containerID="8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.824410 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xl6"] Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.835093 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4xl6"] Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.836632 4805 scope.go:117] "RemoveContainer" containerID="1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.884476 4805 scope.go:117] "RemoveContainer" containerID="985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f" Feb 17 00:59:45 crc kubenswrapper[4805]: E0217 00:59:45.885055 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f\": container with ID starting with 985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f not found: ID does not exist" containerID="985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.885102 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f"} err="failed to get container status \"985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f\": rpc error: code = NotFound desc = could not find container \"985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f\": container with ID starting with 985b71d089a0bdc1f7b35b82bcbf52f1973872a1762d27adccc2ca6016a51b4f not found: ID does not exist" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.885143 4805 scope.go:117] "RemoveContainer" containerID="8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e" Feb 17 00:59:45 crc kubenswrapper[4805]: E0217 00:59:45.885610 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e\": container with ID starting with 8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e not found: ID does not exist" containerID="8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.885639 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e"} err="failed to get container status \"8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e\": rpc error: code = NotFound desc = could not find container \"8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e\": container with ID starting with 8dedb5b1f20a8bb55086c9567940421a64aecb304511b7d23112a3516ca0fc1e not found: ID does not exist" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.885655 4805 scope.go:117] "RemoveContainer" containerID="1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea" Feb 17 00:59:45 crc kubenswrapper[4805]: E0217 00:59:45.886081 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea\": container with ID starting with 1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea not found: ID does not exist" containerID="1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea" Feb 17 00:59:45 crc kubenswrapper[4805]: I0217 00:59:45.886119 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea"} err="failed to get container status \"1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea\": rpc error: code = NotFound desc = could not find container \"1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea\": container with ID starting with 1e19f72e594cdc583d0ca6616dd03aafdc441dd43f8d86bee1e94bf48af50cea not found: ID does not exist" Feb 17 00:59:46 crc kubenswrapper[4805]: I0217 00:59:46.807649 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472ad06d-d06a-4335-a41a-d96504b824a4" path="/var/lib/kubelet/pods/472ad06d-d06a-4335-a41a-d96504b824a4/volumes" Feb 17 00:59:47 crc kubenswrapper[4805]: E0217 00:59:47.787596 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 00:59:48 crc kubenswrapper[4805]: E0217 00:59:48.788066 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 00:59:51 crc kubenswrapper[4805]: I0217 00:59:51.787048 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 00:59:51 crc kubenswrapper[4805]: E0217 00:59:51.787970 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 00:59:58 crc kubenswrapper[4805]: E0217 00:59:58.790500 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.173521 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b"] Feb 17 01:00:00 crc kubenswrapper[4805]: E0217 01:00:00.174055 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472ad06d-d06a-4335-a41a-d96504b824a4" containerName="registry-server" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.174069 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="472ad06d-d06a-4335-a41a-d96504b824a4" containerName="registry-server" Feb 17 01:00:00 crc kubenswrapper[4805]: E0217 01:00:00.174091 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472ad06d-d06a-4335-a41a-d96504b824a4" containerName="extract-content" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.174097 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="472ad06d-d06a-4335-a41a-d96504b824a4" containerName="extract-content" Feb 17 01:00:00 crc kubenswrapper[4805]: E0217 01:00:00.174113 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472ad06d-d06a-4335-a41a-d96504b824a4" containerName="extract-utilities" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.174136 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="472ad06d-d06a-4335-a41a-d96504b824a4" containerName="extract-utilities" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.174360 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="472ad06d-d06a-4335-a41a-d96504b824a4" containerName="registry-server" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.175362 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.177996 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.178837 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.185471 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b"] Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.258313 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7176d28-cd1d-455f-b31a-69211b464bf1-config-volume\") pod \"collect-profiles-29521500-sz97b\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.258647 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7176d28-cd1d-455f-b31a-69211b464bf1-secret-volume\") pod \"collect-profiles-29521500-sz97b\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.258781 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vltg\" (UniqueName: \"kubernetes.io/projected/d7176d28-cd1d-455f-b31a-69211b464bf1-kube-api-access-2vltg\") pod \"collect-profiles-29521500-sz97b\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.361118 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7176d28-cd1d-455f-b31a-69211b464bf1-secret-volume\") pod \"collect-profiles-29521500-sz97b\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.361285 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vltg\" (UniqueName: \"kubernetes.io/projected/d7176d28-cd1d-455f-b31a-69211b464bf1-kube-api-access-2vltg\") pod \"collect-profiles-29521500-sz97b\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.361384 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7176d28-cd1d-455f-b31a-69211b464bf1-config-volume\") pod \"collect-profiles-29521500-sz97b\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.362669 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7176d28-cd1d-455f-b31a-69211b464bf1-config-volume\") pod 
\"collect-profiles-29521500-sz97b\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.371572 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7176d28-cd1d-455f-b31a-69211b464bf1-secret-volume\") pod \"collect-profiles-29521500-sz97b\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.387728 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vltg\" (UniqueName: \"kubernetes.io/projected/d7176d28-cd1d-455f-b31a-69211b464bf1-kube-api-access-2vltg\") pod \"collect-profiles-29521500-sz97b\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:00 crc kubenswrapper[4805]: I0217 01:00:00.508306 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:00 crc kubenswrapper[4805]: E0217 01:00:00.787294 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:00:01 crc kubenswrapper[4805]: I0217 01:00:01.004319 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b"] Feb 17 01:00:01 crc kubenswrapper[4805]: I0217 01:00:01.985348 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" event={"ID":"d7176d28-cd1d-455f-b31a-69211b464bf1","Type":"ContainerStarted","Data":"f9e09533c797f23ff2934b9ae5ca8a4036ab5de4d92decbeafceb6ed58ea1ec8"} Feb 17 01:00:01 crc kubenswrapper[4805]: I0217 01:00:01.985688 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" event={"ID":"d7176d28-cd1d-455f-b31a-69211b464bf1","Type":"ContainerStarted","Data":"a44ffc4c697f6d1183494a2f44094632143d4c3dbc073f3afe9efe3588d3d0c0"} Feb 17 01:00:02 crc kubenswrapper[4805]: I0217 01:00:02.005887 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" podStartSLOduration=2.005869884 podStartE2EDuration="2.005869884s" podCreationTimestamp="2026-02-17 01:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 01:00:02.004531315 +0000 UTC m=+2228.020340713" watchObservedRunningTime="2026-02-17 01:00:02.005869884 +0000 UTC m=+2228.021679282" Feb 17 01:00:02 crc kubenswrapper[4805]: I0217 01:00:02.998101 4805 generic.go:334] "Generic (PLEG): container finished" podID="d7176d28-cd1d-455f-b31a-69211b464bf1" containerID="f9e09533c797f23ff2934b9ae5ca8a4036ab5de4d92decbeafceb6ed58ea1ec8" exitCode=0 Feb 17 01:00:02 crc kubenswrapper[4805]: I0217 01:00:02.998208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" 
event={"ID":"d7176d28-cd1d-455f-b31a-69211b464bf1","Type":"ContainerDied","Data":"f9e09533c797f23ff2934b9ae5ca8a4036ab5de4d92decbeafceb6ed58ea1ec8"} Feb 17 01:00:04 crc kubenswrapper[4805]: I0217 01:00:04.437700 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:04 crc kubenswrapper[4805]: I0217 01:00:04.574972 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7176d28-cd1d-455f-b31a-69211b464bf1-secret-volume\") pod \"d7176d28-cd1d-455f-b31a-69211b464bf1\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " Feb 17 01:00:04 crc kubenswrapper[4805]: I0217 01:00:04.575180 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7176d28-cd1d-455f-b31a-69211b464bf1-config-volume\") pod \"d7176d28-cd1d-455f-b31a-69211b464bf1\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " Feb 17 01:00:04 crc kubenswrapper[4805]: I0217 01:00:04.575263 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vltg\" (UniqueName: \"kubernetes.io/projected/d7176d28-cd1d-455f-b31a-69211b464bf1-kube-api-access-2vltg\") pod \"d7176d28-cd1d-455f-b31a-69211b464bf1\" (UID: \"d7176d28-cd1d-455f-b31a-69211b464bf1\") " Feb 17 01:00:04 crc kubenswrapper[4805]: I0217 01:00:04.576076 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7176d28-cd1d-455f-b31a-69211b464bf1-config-volume" (OuterVolumeSpecName: "config-volume") pod "d7176d28-cd1d-455f-b31a-69211b464bf1" (UID: "d7176d28-cd1d-455f-b31a-69211b464bf1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 01:00:04 crc kubenswrapper[4805]: I0217 01:00:04.581596 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7176d28-cd1d-455f-b31a-69211b464bf1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d7176d28-cd1d-455f-b31a-69211b464bf1" (UID: "d7176d28-cd1d-455f-b31a-69211b464bf1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:00:04 crc kubenswrapper[4805]: I0217 01:00:04.585732 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7176d28-cd1d-455f-b31a-69211b464bf1-kube-api-access-2vltg" (OuterVolumeSpecName: "kube-api-access-2vltg") pod "d7176d28-cd1d-455f-b31a-69211b464bf1" (UID: "d7176d28-cd1d-455f-b31a-69211b464bf1"). InnerVolumeSpecName "kube-api-access-2vltg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:00:04 crc kubenswrapper[4805]: I0217 01:00:04.677374 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7176d28-cd1d-455f-b31a-69211b464bf1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 01:00:04 crc kubenswrapper[4805]: I0217 01:00:04.677407 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7176d28-cd1d-455f-b31a-69211b464bf1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 01:00:04 crc kubenswrapper[4805]: I0217 01:00:04.677418 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vltg\" (UniqueName: \"kubernetes.io/projected/d7176d28-cd1d-455f-b31a-69211b464bf1-kube-api-access-2vltg\") on node \"crc\" DevicePath \"\"" Feb 17 01:00:05 crc kubenswrapper[4805]: I0217 01:00:05.022023 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" event={"ID":"d7176d28-cd1d-455f-b31a-69211b464bf1","Type":"ContainerDied","Data":"a44ffc4c697f6d1183494a2f44094632143d4c3dbc073f3afe9efe3588d3d0c0"} Feb 17 01:00:05 crc kubenswrapper[4805]: I0217 01:00:05.022434 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a44ffc4c697f6d1183494a2f44094632143d4c3dbc073f3afe9efe3588d3d0c0" Feb 17 01:00:05 crc kubenswrapper[4805]: I0217 01:00:05.022070 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b" Feb 17 01:00:05 crc kubenswrapper[4805]: I0217 01:00:05.093668 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv"] Feb 17 01:00:05 crc kubenswrapper[4805]: I0217 01:00:05.104383 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521455-gxtgv"] Feb 17 01:00:06 crc kubenswrapper[4805]: I0217 01:00:06.784917 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:00:06 crc kubenswrapper[4805]: E0217 01:00:06.785457 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:00:06 crc kubenswrapper[4805]: I0217 01:00:06.816337 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4208e92a-1970-441e-a265-f7459d384c6f" path="/var/lib/kubelet/pods/4208e92a-1970-441e-a265-f7459d384c6f/volumes" Feb 17 01:00:12 crc kubenswrapper[4805]: E0217 01:00:12.789104 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:00:12 crc kubenswrapper[4805]: E0217 01:00:12.789181 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:00:19 crc kubenswrapper[4805]: I0217 01:00:19.103505 4805 scope.go:117] "RemoveContainer" containerID="beed81d7ab906d5fa324cf0365e577715c440f709815693adf560b2f5efad59a" Feb 17 01:00:20 crc kubenswrapper[4805]: I0217 01:00:20.784655 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:00:20 crc kubenswrapper[4805]: E0217 01:00:20.785188 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:00:23 crc kubenswrapper[4805]: E0217 01:00:23.788395 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:00:27 crc kubenswrapper[4805]: E0217 01:00:27.787235 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:00:33 crc kubenswrapper[4805]: I0217 01:00:33.785182 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:00:33 crc kubenswrapper[4805]: E0217 01:00:33.786263 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:00:37 crc kubenswrapper[4805]: E0217 01:00:37.787949 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:00:41 crc kubenswrapper[4805]: E0217 01:00:41.788411 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:00:46 crc kubenswrapper[4805]: I0217 01:00:46.785461 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:00:46 crc 
kubenswrapper[4805]: E0217 01:00:46.786528 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:00:51 crc kubenswrapper[4805]: E0217 01:00:51.789568 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:00:52 crc kubenswrapper[4805]: E0217 01:00:52.787227 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.483109 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-48b6c"] Feb 17 01:00:53 crc kubenswrapper[4805]: E0217 01:00:53.483847 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7176d28-cd1d-455f-b31a-69211b464bf1" containerName="collect-profiles" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.483882 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7176d28-cd1d-455f-b31a-69211b464bf1" containerName="collect-profiles" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.484232 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7176d28-cd1d-455f-b31a-69211b464bf1" containerName="collect-profiles" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.486732 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.495092 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48b6c"] Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.576187 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-utilities\") pod \"community-operators-48b6c\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.576268 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99xv\" (UniqueName: \"kubernetes.io/projected/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-kube-api-access-d99xv\") pod \"community-operators-48b6c\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.576574 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-catalog-content\") pod \"community-operators-48b6c\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.681009 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-catalog-content\") pod \"community-operators-48b6c\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.681226 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-utilities\") pod \"community-operators-48b6c\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.681281 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99xv\" (UniqueName: \"kubernetes.io/projected/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-kube-api-access-d99xv\") pod \"community-operators-48b6c\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.681681 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-catalog-content\") pod \"community-operators-48b6c\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.681795 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-utilities\") pod \"community-operators-48b6c\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.709067 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d99xv\" (UniqueName: \"kubernetes.io/projected/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-kube-api-access-d99xv\") pod \"community-operators-48b6c\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:00:53 crc kubenswrapper[4805]: I0217 01:00:53.821067 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:00:54 crc kubenswrapper[4805]: I0217 01:00:54.379671 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48b6c"] Feb 17 01:00:54 crc kubenswrapper[4805]: I0217 01:00:54.670299 4805 generic.go:334] "Generic (PLEG): container finished" podID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerID="41945901876a4d783055176c053c17555969a9fa45106ecfce7e9399190e41a1" exitCode=0 Feb 17 01:00:54 crc kubenswrapper[4805]: I0217 01:00:54.670374 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48b6c" event={"ID":"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7","Type":"ContainerDied","Data":"41945901876a4d783055176c053c17555969a9fa45106ecfce7e9399190e41a1"} Feb 17 01:00:54 crc kubenswrapper[4805]: I0217 01:00:54.671885 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48b6c" event={"ID":"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7","Type":"ContainerStarted","Data":"e448ff02618c0175069fe206400b32d5d609f07dc19242e6df6791337f326a72"} Feb 17 01:00:55 crc kubenswrapper[4805]: I0217 01:00:55.899963 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9v5t"] Feb 17 01:00:55 crc kubenswrapper[4805]: I0217 01:00:55.904543 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:00:55 crc kubenswrapper[4805]: I0217 01:00:55.918986 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9v5t"] Feb 17 01:00:56 crc kubenswrapper[4805]: I0217 01:00:56.046312 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-utilities\") pod \"redhat-operators-k9v5t\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:00:56 crc kubenswrapper[4805]: I0217 01:00:56.046431 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv4ql\" (UniqueName: \"kubernetes.io/projected/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-kube-api-access-hv4ql\") pod \"redhat-operators-k9v5t\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:00:56 crc kubenswrapper[4805]: I0217 01:00:56.046721 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-catalog-content\") pod \"redhat-operators-k9v5t\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:00:56 crc kubenswrapper[4805]: I0217 01:00:56.148451 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-catalog-content\") pod \"redhat-operators-k9v5t\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:00:56 crc kubenswrapper[4805]: I0217 01:00:56.148832 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-catalog-content\") pod \"redhat-operators-k9v5t\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:00:56 crc kubenswrapper[4805]: I0217 01:00:56.148970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-utilities\") pod \"redhat-operators-k9v5t\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:00:56 crc kubenswrapper[4805]: I0217 01:00:56.149209 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-utilities\") pod \"redhat-operators-k9v5t\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:00:56 crc kubenswrapper[4805]: I0217 01:00:56.149267 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv4ql\" (UniqueName: \"kubernetes.io/projected/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-kube-api-access-hv4ql\") pod \"redhat-operators-k9v5t\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:00:56 crc kubenswrapper[4805]: I0217 01:00:56.169397 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hv4ql\" (UniqueName: \"kubernetes.io/projected/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-kube-api-access-hv4ql\") pod \"redhat-operators-k9v5t\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:00:56 crc kubenswrapper[4805]: I0217 01:00:56.243543 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:00:56 crc kubenswrapper[4805]: I0217 01:00:56.752533 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9v5t"] Feb 17 01:00:57 crc kubenswrapper[4805]: I0217 01:00:57.705183 4805 generic.go:334] "Generic (PLEG): container finished" podID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerID="8a0279ff45343920daea52f17a4ed31fec32175ef8a2c7a29d5b6f17622e7930" exitCode=0 Feb 17 01:00:57 crc kubenswrapper[4805]: I0217 01:00:57.705402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9v5t" event={"ID":"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0","Type":"ContainerDied","Data":"8a0279ff45343920daea52f17a4ed31fec32175ef8a2c7a29d5b6f17622e7930"} Feb 17 01:00:57 crc kubenswrapper[4805]: I0217 01:00:57.705522 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9v5t" event={"ID":"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0","Type":"ContainerStarted","Data":"7d9be11596d448e17084772ac3ac1a7783ca9a60ee975cb8a3b4441ba6688fe4"} Feb 17 01:00:58 crc kubenswrapper[4805]: I0217 01:00:58.787034 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:00:58 crc kubenswrapper[4805]: E0217 01:00:58.787600 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.162180 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29521501-l8z6t"] Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.163734 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.185866 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521501-l8z6t"] Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.257317 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-fernet-keys\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.257368 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-config-data\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.257416 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blxw8\" (UniqueName: \"kubernetes.io/projected/9c029f0d-d189-4126-8bfb-80fd5b1f1247-kube-api-access-blxw8\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.257479 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-combined-ca-bundle\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.360384 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-combined-ca-bundle\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.360653 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-fernet-keys\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.360697 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-config-data\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.360799 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blxw8\" (UniqueName: \"kubernetes.io/projected/9c029f0d-d189-4126-8bfb-80fd5b1f1247-kube-api-access-blxw8\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.368038 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-fernet-keys\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.368140 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-combined-ca-bundle\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.369848 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-config-data\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.384125 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blxw8\" (UniqueName: \"kubernetes.io/projected/9c029f0d-d189-4126-8bfb-80fd5b1f1247-kube-api-access-blxw8\") pod \"keystone-cron-29521501-l8z6t\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: I0217 01:01:00.500632 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:00 crc kubenswrapper[4805]: W0217 01:01:00.997666 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c029f0d_d189_4126_8bfb_80fd5b1f1247.slice/crio-8332ea08dd136364a49a880dc76837d87d76e52ae66d4d29ca8f95790f6ec337 WatchSource:0}: Error finding container 8332ea08dd136364a49a880dc76837d87d76e52ae66d4d29ca8f95790f6ec337: Status 404 returned error can't find the container with id 8332ea08dd136364a49a880dc76837d87d76e52ae66d4d29ca8f95790f6ec337 Feb 17 01:01:01 crc kubenswrapper[4805]: I0217 01:01:01.001665 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521501-l8z6t"] Feb 17 01:01:01 crc kubenswrapper[4805]: I0217 01:01:01.751262 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521501-l8z6t" event={"ID":"9c029f0d-d189-4126-8bfb-80fd5b1f1247","Type":"ContainerStarted","Data":"77beaad73c48e0593f5372b0c86a72920603a6dd4439fdfbaa91a9cb4cfc934a"} Feb 17 01:01:01 crc kubenswrapper[4805]: I0217 01:01:01.751676 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521501-l8z6t" event={"ID":"9c029f0d-d189-4126-8bfb-80fd5b1f1247","Type":"ContainerStarted","Data":"8332ea08dd136364a49a880dc76837d87d76e52ae66d4d29ca8f95790f6ec337"} Feb 17 01:01:01 crc kubenswrapper[4805]: I0217 01:01:01.782600 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29521501-l8z6t" podStartSLOduration=1.7825799039999999 podStartE2EDuration="1.782579904s" podCreationTimestamp="2026-02-17 01:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 01:01:01.772908289 +0000 UTC m=+2287.788717737" watchObservedRunningTime="2026-02-17 01:01:01.782579904 +0000 UTC m=+2287.798389302" Feb 17 01:01:03 crc kubenswrapper[4805]: E0217 01:01:03.787391 
4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:01:04 crc kubenswrapper[4805]: I0217 01:01:04.789005 4805 generic.go:334] "Generic (PLEG): container finished" podID="9c029f0d-d189-4126-8bfb-80fd5b1f1247" containerID="77beaad73c48e0593f5372b0c86a72920603a6dd4439fdfbaa91a9cb4cfc934a" exitCode=0 Feb 17 01:01:04 crc kubenswrapper[4805]: I0217 01:01:04.819104 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521501-l8z6t" event={"ID":"9c029f0d-d189-4126-8bfb-80fd5b1f1247","Type":"ContainerDied","Data":"77beaad73c48e0593f5372b0c86a72920603a6dd4439fdfbaa91a9cb4cfc934a"} Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.230407 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.300269 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-fernet-keys\") pod \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.300394 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blxw8\" (UniqueName: \"kubernetes.io/projected/9c029f0d-d189-4126-8bfb-80fd5b1f1247-kube-api-access-blxw8\") pod \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.300536 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-combined-ca-bundle\") pod \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.300561 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-config-data\") pod \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\" (UID: \"9c029f0d-d189-4126-8bfb-80fd5b1f1247\") " Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.308966 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c029f0d-d189-4126-8bfb-80fd5b1f1247-kube-api-access-blxw8" (OuterVolumeSpecName: "kube-api-access-blxw8") pod "9c029f0d-d189-4126-8bfb-80fd5b1f1247" (UID: "9c029f0d-d189-4126-8bfb-80fd5b1f1247"). InnerVolumeSpecName "kube-api-access-blxw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.310443 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9c029f0d-d189-4126-8bfb-80fd5b1f1247" (UID: "9c029f0d-d189-4126-8bfb-80fd5b1f1247"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.336115 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c029f0d-d189-4126-8bfb-80fd5b1f1247" (UID: "9c029f0d-d189-4126-8bfb-80fd5b1f1247"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.358655 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-config-data" (OuterVolumeSpecName: "config-data") pod "9c029f0d-d189-4126-8bfb-80fd5b1f1247" (UID: "9c029f0d-d189-4126-8bfb-80fd5b1f1247"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.405388 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.405420 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.405429 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9c029f0d-d189-4126-8bfb-80fd5b1f1247-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.405437 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blxw8\" (UniqueName: \"kubernetes.io/projected/9c029f0d-d189-4126-8bfb-80fd5b1f1247-kube-api-access-blxw8\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.825961 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521501-l8z6t" event={"ID":"9c029f0d-d189-4126-8bfb-80fd5b1f1247","Type":"ContainerDied","Data":"8332ea08dd136364a49a880dc76837d87d76e52ae66d4d29ca8f95790f6ec337"} Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.826004 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8332ea08dd136364a49a880dc76837d87d76e52ae66d4d29ca8f95790f6ec337" Feb 17 01:01:06 crc kubenswrapper[4805]: I0217 01:01:06.826176 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29521501-l8z6t" Feb 17 01:01:07 crc kubenswrapper[4805]: E0217 01:01:07.802003 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:01:13 crc kubenswrapper[4805]: I0217 01:01:13.785837 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:01:13 crc kubenswrapper[4805]: E0217 01:01:13.788533 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:01:13 crc kubenswrapper[4805]: I0217 01:01:13.941640 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48b6c" event={"ID":"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7","Type":"ContainerStarted","Data":"d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7"} Feb 17 01:01:14 crc kubenswrapper[4805]: I0217 01:01:14.955089 4805 generic.go:334] "Generic (PLEG): container finished" podID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerID="d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7" exitCode=0 Feb 17 01:01:14 crc kubenswrapper[4805]: I0217 01:01:14.955156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48b6c" event={"ID":"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7","Type":"ContainerDied","Data":"d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7"} Feb 17 01:01:15 crc kubenswrapper[4805]: I0217 01:01:15.966120 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48b6c" event={"ID":"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7","Type":"ContainerStarted","Data":"a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d"} Feb 17 01:01:15 crc kubenswrapper[4805]: I0217 01:01:15.968742 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9v5t" event={"ID":"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0","Type":"ContainerStarted","Data":"04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c"} Feb 17 01:01:15 crc kubenswrapper[4805]: I0217 01:01:15.990603 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-48b6c" podStartSLOduration=2.300313347 podStartE2EDuration="22.990585128s" podCreationTimestamp="2026-02-17 01:00:53 +0000 UTC" firstStartedPulling="2026-02-17 01:00:54.672345 +0000 UTC m=+2280.688154398" lastFinishedPulling="2026-02-17 01:01:15.362616791 +0000 UTC m=+2301.378426179" observedRunningTime="2026-02-17 01:01:15.987932983 +0000 UTC m=+2302.003742381" watchObservedRunningTime="2026-02-17 01:01:15.990585128 +0000 UTC m=+2302.006394526" Feb 17 01:01:16 crc kubenswrapper[4805]: E0217 01:01:16.786749 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:01:19 crc kubenswrapper[4805]: E0217 01:01:19.786698 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:01:21 crc kubenswrapper[4805]: I0217 01:01:21.036840 4805 generic.go:334] "Generic (PLEG): container finished" podID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerID="04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c" exitCode=0 Feb 17 01:01:21 crc kubenswrapper[4805]: I0217 01:01:21.036909 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9v5t" event={"ID":"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0","Type":"ContainerDied","Data":"04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c"} Feb 17 01:01:22 crc kubenswrapper[4805]: I0217 01:01:22.049945 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9v5t" event={"ID":"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0","Type":"ContainerStarted","Data":"60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42"} Feb 17 01:01:22 crc kubenswrapper[4805]: I0217 01:01:22.070252 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9v5t" podStartSLOduration=3.27739718 podStartE2EDuration="27.070231926s" podCreationTimestamp="2026-02-17 01:00:55 +0000 UTC" firstStartedPulling="2026-02-17 01:00:57.711119184 +0000 UTC m=+2283.726928572" lastFinishedPulling="2026-02-17 01:01:21.50395392 +0000 UTC m=+2307.519763318" observedRunningTime="2026-02-17 01:01:22.067947911 +0000 UTC m=+2308.083757309" watchObservedRunningTime="2026-02-17 01:01:22.070231926 +0000 UTC m=+2308.086041324" Feb 17 01:01:23 crc kubenswrapper[4805]: I0217 01:01:23.821861 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:01:23 crc kubenswrapper[4805]: I0217 01:01:23.823503 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:01:24 crc kubenswrapper[4805]: I0217 01:01:24.887191 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-48b6c" podUID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerName="registry-server" probeResult="failure" output=< Feb 17 01:01:24 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 01:01:24 crc kubenswrapper[4805]: > Feb 17 01:01:26 crc kubenswrapper[4805]: I0217 01:01:26.244591 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:01:26 crc kubenswrapper[4805]: I0217 01:01:26.244958 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:01:27 crc kubenswrapper[4805]: I0217 01:01:27.350684 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k9v5t" podUID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerName="registry-server" probeResult="failure" 
output=< Feb 17 01:01:27 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 01:01:27 crc kubenswrapper[4805]: > Feb 17 01:01:27 crc kubenswrapper[4805]: I0217 01:01:27.784936 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:01:27 crc kubenswrapper[4805]: E0217 01:01:27.785270 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:01:29 crc kubenswrapper[4805]: E0217 01:01:29.787580 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:01:32 crc kubenswrapper[4805]: E0217 01:01:32.788083 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:01:33 crc kubenswrapper[4805]: I0217 01:01:33.917353 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:01:34 crc kubenswrapper[4805]: I0217 01:01:34.014807 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:01:34 crc kubenswrapper[4805]: I0217 01:01:34.179048 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-48b6c"] Feb 17 01:01:35 crc kubenswrapper[4805]: I0217 01:01:35.209172 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-48b6c" podUID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerName="registry-server" containerID="cri-o://a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d" gracePeriod=2 Feb 17 01:01:35 crc kubenswrapper[4805]: I0217 01:01:35.800759 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:01:35 crc kubenswrapper[4805]: I0217 01:01:35.865536 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-utilities\") pod \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " Feb 17 01:01:35 crc kubenswrapper[4805]: I0217 01:01:35.865605 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-catalog-content\") pod \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " Feb 17 01:01:35 crc kubenswrapper[4805]: I0217 01:01:35.865689 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d99xv\" (UniqueName: \"kubernetes.io/projected/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-kube-api-access-d99xv\") pod \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\" (UID: \"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7\") " Feb 17 01:01:35 crc kubenswrapper[4805]: I0217 01:01:35.866768 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-utilities" (OuterVolumeSpecName: "utilities") pod "a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" (UID: "a94e5bd0-3177-4d5d-969a-b5cd3daf94f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:01:35 crc kubenswrapper[4805]: I0217 01:01:35.868693 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:35 crc kubenswrapper[4805]: I0217 01:01:35.877595 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-kube-api-access-d99xv" (OuterVolumeSpecName: "kube-api-access-d99xv") pod "a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" (UID: "a94e5bd0-3177-4d5d-969a-b5cd3daf94f7"). InnerVolumeSpecName "kube-api-access-d99xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:01:35 crc kubenswrapper[4805]: I0217 01:01:35.923160 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" (UID: "a94e5bd0-3177-4d5d-969a-b5cd3daf94f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:01:35 crc kubenswrapper[4805]: I0217 01:01:35.970227 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d99xv\" (UniqueName: \"kubernetes.io/projected/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-kube-api-access-d99xv\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:35 crc kubenswrapper[4805]: I0217 01:01:35.970262 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.225584 4805 generic.go:334] "Generic (PLEG): container finished" podID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerID="a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d" exitCode=0 Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.225821 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-48b6c" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.225832 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48b6c" event={"ID":"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7","Type":"ContainerDied","Data":"a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d"} Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.225879 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48b6c" event={"ID":"a94e5bd0-3177-4d5d-969a-b5cd3daf94f7","Type":"ContainerDied","Data":"e448ff02618c0175069fe206400b32d5d609f07dc19242e6df6791337f326a72"} Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.225910 4805 scope.go:117] "RemoveContainer" containerID="a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.262055 4805 scope.go:117] "RemoveContainer" containerID="d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.280710 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-48b6c"] Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.295742 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-48b6c"] Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.307045 4805 scope.go:117] "RemoveContainer" containerID="41945901876a4d783055176c053c17555969a9fa45106ecfce7e9399190e41a1" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.358831 4805 scope.go:117] "RemoveContainer" containerID="a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d" Feb 17 01:01:36 crc kubenswrapper[4805]: E0217 01:01:36.359593 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d\": container with ID starting with a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d not found: ID does not exist" containerID="a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.359694 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d"} err="failed to get container status 
\"a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d\": rpc error: code = NotFound desc = could not find container \"a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d\": container with ID starting with a695514f14e2baee9d0433aaea2943432aceeb1db254b3279344a6f5268bbb6d not found: ID does not exist" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.359752 4805 scope.go:117] "RemoveContainer" containerID="d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7" Feb 17 01:01:36 crc kubenswrapper[4805]: E0217 01:01:36.360903 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7\": container with ID starting with d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7 not found: ID does not exist" containerID="d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.360962 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7"} err="failed to get container status \"d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7\": rpc error: code = NotFound desc = could not find container \"d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7\": container with ID starting with d20cdeb3f4dd00e5a5e68290609285ca4e7572a3a4eeb7109e15d4d4c59217c7 not found: ID does not exist" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.361004 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.361011 4805 scope.go:117] "RemoveContainer" containerID="41945901876a4d783055176c053c17555969a9fa45106ecfce7e9399190e41a1" Feb 17 01:01:36 crc kubenswrapper[4805]: E0217 01:01:36.361574 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41945901876a4d783055176c053c17555969a9fa45106ecfce7e9399190e41a1\": container with ID starting with 41945901876a4d783055176c053c17555969a9fa45106ecfce7e9399190e41a1 not found: ID does not exist" containerID="41945901876a4d783055176c053c17555969a9fa45106ecfce7e9399190e41a1" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.361625 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41945901876a4d783055176c053c17555969a9fa45106ecfce7e9399190e41a1"} err="failed to get container status \"41945901876a4d783055176c053c17555969a9fa45106ecfce7e9399190e41a1\": rpc error: code = NotFound desc = could not find container \"41945901876a4d783055176c053c17555969a9fa45106ecfce7e9399190e41a1\": container with ID starting with 41945901876a4d783055176c053c17555969a9fa45106ecfce7e9399190e41a1 not found: ID does not exist" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.438390 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:01:36 crc kubenswrapper[4805]: I0217 01:01:36.809234 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" path="/var/lib/kubelet/pods/a94e5bd0-3177-4d5d-969a-b5cd3daf94f7/volumes" Feb 17 01:01:38 crc kubenswrapper[4805]: I0217 01:01:38.591269 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-k9v5t"] Feb 17 01:01:38 crc kubenswrapper[4805]: I0217 01:01:38.592445 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k9v5t" podUID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerName="registry-server" containerID="cri-o://60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42" gracePeriod=2 Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.212372 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.284668 4805 generic.go:334] "Generic (PLEG): container finished" podID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerID="60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42" exitCode=0 Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.284725 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9v5t" event={"ID":"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0","Type":"ContainerDied","Data":"60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42"} Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.284751 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9v5t" event={"ID":"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0","Type":"ContainerDied","Data":"7d9be11596d448e17084772ac3ac1a7783ca9a60ee975cb8a3b4441ba6688fe4"} Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.284767 4805 scope.go:117] "RemoveContainer" containerID="60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.284885 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9v5t" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.287976 4805 generic.go:334] "Generic (PLEG): container finished" podID="4a95c358-9f7f-42e7-b497-7f9f76dc01ce" containerID="d2a787b3bda38f9973c51e977f871419fac1280b03aac9caa2994136d9f4c38b" exitCode=0 Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.288015 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" event={"ID":"4a95c358-9f7f-42e7-b497-7f9f76dc01ce","Type":"ContainerDied","Data":"d2a787b3bda38f9973c51e977f871419fac1280b03aac9caa2994136d9f4c38b"} Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.307246 4805 scope.go:117] "RemoveContainer" containerID="04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.341439 4805 scope.go:117] "RemoveContainer" containerID="8a0279ff45343920daea52f17a4ed31fec32175ef8a2c7a29d5b6f17622e7930" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.359431 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-catalog-content\") pod \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.359678 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv4ql\" (UniqueName: \"kubernetes.io/projected/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-kube-api-access-hv4ql\") pod \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.359763 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-utilities\") pod \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\" (UID: \"63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0\") " Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.360709 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-utilities" (OuterVolumeSpecName: "utilities") pod "63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" (UID: "63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.365983 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-kube-api-access-hv4ql" (OuterVolumeSpecName: "kube-api-access-hv4ql") pod "63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" (UID: "63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0"). InnerVolumeSpecName "kube-api-access-hv4ql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.438452 4805 scope.go:117] "RemoveContainer" containerID="60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42" Feb 17 01:01:39 crc kubenswrapper[4805]: E0217 01:01:39.439263 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42\": container with ID starting with 60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42 not found: ID does not exist" containerID="60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.439344 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42"} err="failed to get container status \"60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42\": rpc error: code = NotFound desc = could not find container \"60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42\": container with ID starting with 60b959f90cdafd02fd8f50de31a0612fbc350b6c8d2c503c8366d96e5f172b42 not found: ID does not exist" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.439381 4805 scope.go:117] "RemoveContainer" containerID="04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c" Feb 17 01:01:39 crc kubenswrapper[4805]: E0217 01:01:39.439810 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c\": container with ID starting with 04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c not found: ID does not exist" containerID="04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.439976 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c"} err="failed to get container status \"04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c\": rpc error: code = NotFound desc = could not find container \"04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c\": container with ID starting with 04c8a8f92ab701039782b73fb841bb7294fea07e6a7cc6cad25462cf57a9d17c not found: ID does not exist" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.440184 4805 scope.go:117] "RemoveContainer" containerID="8a0279ff45343920daea52f17a4ed31fec32175ef8a2c7a29d5b6f17622e7930" Feb 17 01:01:39 crc kubenswrapper[4805]: E0217 01:01:39.440660 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a0279ff45343920daea52f17a4ed31fec32175ef8a2c7a29d5b6f17622e7930\": container with ID starting with 8a0279ff45343920daea52f17a4ed31fec32175ef8a2c7a29d5b6f17622e7930 not found: ID does not exist" containerID="8a0279ff45343920daea52f17a4ed31fec32175ef8a2c7a29d5b6f17622e7930" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.440694 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a0279ff45343920daea52f17a4ed31fec32175ef8a2c7a29d5b6f17622e7930"} err="failed to get container status \"8a0279ff45343920daea52f17a4ed31fec32175ef8a2c7a29d5b6f17622e7930\": rpc error: code = NotFound desc = could not 
find container \"8a0279ff45343920daea52f17a4ed31fec32175ef8a2c7a29d5b6f17622e7930\": container with ID starting with 8a0279ff45343920daea52f17a4ed31fec32175ef8a2c7a29d5b6f17622e7930 not found: ID does not exist" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.463159 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv4ql\" (UniqueName: \"kubernetes.io/projected/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-kube-api-access-hv4ql\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.463202 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.492017 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" (UID: "63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.565061 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.642366 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9v5t"] Feb 17 01:01:39 crc kubenswrapper[4805]: I0217 01:01:39.658276 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k9v5t"] Feb 17 01:01:40 crc kubenswrapper[4805]: E0217 01:01:40.787508 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.797348 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" path="/var/lib/kubelet/pods/63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0/volumes" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.818691 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.894503 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-ssh-key-openstack-edpm-ipam\") pod \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.894599 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-secret-0\") pod \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.894808 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-combined-ca-bundle\") pod \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.894871 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-inventory\") pod \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.895069 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4d8l\" (UniqueName: \"kubernetes.io/projected/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-kube-api-access-t4d8l\") pod \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\" (UID: \"4a95c358-9f7f-42e7-b497-7f9f76dc01ce\") " Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.903446 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-kube-api-access-t4d8l" (OuterVolumeSpecName: "kube-api-access-t4d8l") pod "4a95c358-9f7f-42e7-b497-7f9f76dc01ce" (UID: "4a95c358-9f7f-42e7-b497-7f9f76dc01ce"). InnerVolumeSpecName "kube-api-access-t4d8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.908640 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4a95c358-9f7f-42e7-b497-7f9f76dc01ce" (UID: "4a95c358-9f7f-42e7-b497-7f9f76dc01ce"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.924517 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "4a95c358-9f7f-42e7-b497-7f9f76dc01ce" (UID: "4a95c358-9f7f-42e7-b497-7f9f76dc01ce"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.933973 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-inventory" (OuterVolumeSpecName: "inventory") pod "4a95c358-9f7f-42e7-b497-7f9f76dc01ce" (UID: "4a95c358-9f7f-42e7-b497-7f9f76dc01ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.956352 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a95c358-9f7f-42e7-b497-7f9f76dc01ce" (UID: "4a95c358-9f7f-42e7-b497-7f9f76dc01ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.997816 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.997856 4805 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.997869 4805 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.997882 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:40 crc kubenswrapper[4805]: I0217 01:01:40.997895 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4d8l\" (UniqueName: \"kubernetes.io/projected/4a95c358-9f7f-42e7-b497-7f9f76dc01ce-kube-api-access-t4d8l\") on node \"crc\" DevicePath \"\"" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.319791 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" event={"ID":"4a95c358-9f7f-42e7-b497-7f9f76dc01ce","Type":"ContainerDied","Data":"e070247f14c9fcbdba5634a6710edef15ec3f07ab95ddae9e5a142cb270ad52e"} Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.319864 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86ss7" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.319868 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e070247f14c9fcbdba5634a6710edef15ec3f07ab95ddae9e5a142cb270ad52e" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.438714 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524"] Feb 17 01:01:41 crc kubenswrapper[4805]: E0217 01:01:41.439357 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a95c358-9f7f-42e7-b497-7f9f76dc01ce" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.439388 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a95c358-9f7f-42e7-b497-7f9f76dc01ce" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 17 01:01:41 crc kubenswrapper[4805]: E0217 01:01:41.439422 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerName="registry-server" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.439436 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerName="registry-server" Feb 17 01:01:41 crc kubenswrapper[4805]: E0217 01:01:41.439453 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerName="extract-content" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.439464 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerName="extract-content" Feb 17 01:01:41 crc kubenswrapper[4805]: E0217 01:01:41.439492 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c029f0d-d189-4126-8bfb-80fd5b1f1247" containerName="keystone-cron" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.439503 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c029f0d-d189-4126-8bfb-80fd5b1f1247" containerName="keystone-cron" Feb 17 01:01:41 crc kubenswrapper[4805]: E0217 01:01:41.439518 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerName="extract-content" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.439528 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerName="extract-content" Feb 17 01:01:41 crc kubenswrapper[4805]: E0217 01:01:41.439554 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerName="registry-server" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.439564 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerName="registry-server" Feb 17 01:01:41 crc kubenswrapper[4805]: E0217 01:01:41.439597 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerName="extract-utilities" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.439609 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerName="extract-utilities" Feb 17 01:01:41 crc kubenswrapper[4805]: E0217 01:01:41.439629 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerName="extract-utilities" Feb 17 01:01:41 crc 
kubenswrapper[4805]: I0217 01:01:41.439640 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerName="extract-utilities" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.439952 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a94e5bd0-3177-4d5d-969a-b5cd3daf94f7" containerName="registry-server" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.439984 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c029f0d-d189-4126-8bfb-80fd5b1f1247" containerName="keystone-cron" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.440010 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a95c358-9f7f-42e7-b497-7f9f76dc01ce" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.440169 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c9b05a-45f7-4e28-9b9f-d8afa88a8ef0" containerName="registry-server" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.442069 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.448209 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.448514 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.448728 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.449137 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.449371 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.466027 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524"] Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.508669 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h725\" (UniqueName: \"kubernetes.io/projected/f91e2557-4edd-4cab-ae36-dce0f28acbb0-kube-api-access-5h725\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.508703 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.508744 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.508765 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.508802 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.508884 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.508940 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.610430 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.610480 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.610571 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h725\" (UniqueName: \"kubernetes.io/projected/f91e2557-4edd-4cab-ae36-dce0f28acbb0-kube-api-access-5h725\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc 
kubenswrapper[4805]: I0217 01:01:41.610593 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.610629 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.610648 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.610680 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.615034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.616034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.616439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.616506 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.618448 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.619097 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.630972 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h725\" (UniqueName: \"kubernetes.io/projected/f91e2557-4edd-4cab-ae36-dce0f28acbb0-kube-api-access-5h725\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8524\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:41 crc kubenswrapper[4805]: I0217 01:01:41.766655 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:01:42 crc kubenswrapper[4805]: I0217 01:01:42.427986 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524"] Feb 17 01:01:42 crc kubenswrapper[4805]: I0217 01:01:42.784918 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:01:42 crc kubenswrapper[4805]: E0217 01:01:42.785256 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:01:43 crc kubenswrapper[4805]: I0217 01:01:43.343534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" event={"ID":"f91e2557-4edd-4cab-ae36-dce0f28acbb0","Type":"ContainerStarted","Data":"79ff656eee0a11f41ef5efc587bc658a51df46ab6254239b1ce45a078c075007"} Feb 17 01:01:43 crc kubenswrapper[4805]: I0217 01:01:43.343805 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" event={"ID":"f91e2557-4edd-4cab-ae36-dce0f28acbb0","Type":"ContainerStarted","Data":"e806d808c3ad9d25e57a72bf06b13833a0ea820bbf41fb50258b35b9b7a5171e"} Feb 17 01:01:43 crc kubenswrapper[4805]: I0217 01:01:43.368836 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" podStartSLOduration=1.950519951 podStartE2EDuration="2.368818556s" podCreationTimestamp="2026-02-17 01:01:41 +0000 UTC" firstStartedPulling="2026-02-17 01:01:42.43474742 +0000 UTC m=+2328.450556848" 
lastFinishedPulling="2026-02-17 01:01:42.853046045 +0000 UTC m=+2328.868855453" observedRunningTime="2026-02-17 01:01:43.366241803 +0000 UTC m=+2329.382051201" watchObservedRunningTime="2026-02-17 01:01:43.368818556 +0000 UTC m=+2329.384627954" Feb 17 01:01:43 crc kubenswrapper[4805]: E0217 01:01:43.787946 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:01:52 crc kubenswrapper[4805]: E0217 01:01:52.787778 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:01:54 crc kubenswrapper[4805]: E0217 01:01:54.799255 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:01:56 crc kubenswrapper[4805]: I0217 01:01:56.784782 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:01:56 crc kubenswrapper[4805]: E0217 01:01:56.785742 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:02:04 crc kubenswrapper[4805]: E0217 01:02:04.806635 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:02:08 crc kubenswrapper[4805]: I0217 01:02:08.785568 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:02:08 crc kubenswrapper[4805]: E0217 01:02:08.786692 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:02:09 crc kubenswrapper[4805]: E0217 01:02:09.789231 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:02:19 crc kubenswrapper[4805]: E0217 01:02:19.785444 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:02:22 crc kubenswrapper[4805]: I0217 01:02:22.784891 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:02:22 crc kubenswrapper[4805]: E0217 01:02:22.785832 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:02:23 crc kubenswrapper[4805]: E0217 01:02:23.787181 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:02:32 crc kubenswrapper[4805]: E0217 01:02:32.788475 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:02:36 crc kubenswrapper[4805]: I0217 01:02:36.785260 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:02:36 crc kubenswrapper[4805]: E0217 01:02:36.786045 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:02:36 crc kubenswrapper[4805]: E0217 01:02:36.788988 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:02:39 crc kubenswrapper[4805]: I0217 01:02:39.008321 4805 generic.go:334] "Generic (PLEG): container finished" podID="f91e2557-4edd-4cab-ae36-dce0f28acbb0" containerID="79ff656eee0a11f41ef5efc587bc658a51df46ab6254239b1ce45a078c075007" exitCode=2 Feb 17 01:02:39 crc kubenswrapper[4805]: I0217 01:02:39.008801 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" 
event={"ID":"f91e2557-4edd-4cab-ae36-dce0f28acbb0","Type":"ContainerDied","Data":"79ff656eee0a11f41ef5efc587bc658a51df46ab6254239b1ce45a078c075007"} Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.573390 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.713663 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-1\") pod \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.714013 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-inventory\") pod \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.714092 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h725\" (UniqueName: \"kubernetes.io/projected/f91e2557-4edd-4cab-ae36-dce0f28acbb0-kube-api-access-5h725\") pod \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.714189 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-telemetry-combined-ca-bundle\") pod \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.714236 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ssh-key-openstack-edpm-ipam\") pod \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.714292 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-2\") pod \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.714525 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-0\") pod \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\" (UID: \"f91e2557-4edd-4cab-ae36-dce0f28acbb0\") " Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.721854 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f91e2557-4edd-4cab-ae36-dce0f28acbb0" (UID: "f91e2557-4edd-4cab-ae36-dce0f28acbb0"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.724745 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91e2557-4edd-4cab-ae36-dce0f28acbb0-kube-api-access-5h725" (OuterVolumeSpecName: "kube-api-access-5h725") pod "f91e2557-4edd-4cab-ae36-dce0f28acbb0" (UID: "f91e2557-4edd-4cab-ae36-dce0f28acbb0"). InnerVolumeSpecName "kube-api-access-5h725". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.759823 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f91e2557-4edd-4cab-ae36-dce0f28acbb0" (UID: "f91e2557-4edd-4cab-ae36-dce0f28acbb0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.763548 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "f91e2557-4edd-4cab-ae36-dce0f28acbb0" (UID: "f91e2557-4edd-4cab-ae36-dce0f28acbb0"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.766612 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-inventory" (OuterVolumeSpecName: "inventory") pod "f91e2557-4edd-4cab-ae36-dce0f28acbb0" (UID: "f91e2557-4edd-4cab-ae36-dce0f28acbb0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.769514 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "f91e2557-4edd-4cab-ae36-dce0f28acbb0" (UID: "f91e2557-4edd-4cab-ae36-dce0f28acbb0"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.770341 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "f91e2557-4edd-4cab-ae36-dce0f28acbb0" (UID: "f91e2557-4edd-4cab-ae36-dce0f28acbb0"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.817711 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.817756 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.817775 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h725\" (UniqueName: \"kubernetes.io/projected/f91e2557-4edd-4cab-ae36-dce0f28acbb0-kube-api-access-5h725\") on node \"crc\" DevicePath \"\"" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.817794 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.817810 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.817827 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 17 01:02:40 crc kubenswrapper[4805]: I0217 01:02:40.817844 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f91e2557-4edd-4cab-ae36-dce0f28acbb0-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 17 01:02:41 crc kubenswrapper[4805]: I0217 01:02:41.034402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" event={"ID":"f91e2557-4edd-4cab-ae36-dce0f28acbb0","Type":"ContainerDied","Data":"e806d808c3ad9d25e57a72bf06b13833a0ea820bbf41fb50258b35b9b7a5171e"} Feb 17 01:02:41 crc kubenswrapper[4805]: I0217 01:02:41.034469 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e806d808c3ad9d25e57a72bf06b13833a0ea820bbf41fb50258b35b9b7a5171e" Feb 17 01:02:41 crc kubenswrapper[4805]: I0217 01:02:41.034494 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8524" Feb 17 01:02:45 crc kubenswrapper[4805]: E0217 01:02:45.788696 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.038933 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b"] Feb 17 01:02:48 crc kubenswrapper[4805]: E0217 01:02:48.039964 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91e2557-4edd-4cab-ae36-dce0f28acbb0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.039985 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91e2557-4edd-4cab-ae36-dce0f28acbb0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.040259 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91e2557-4edd-4cab-ae36-dce0f28acbb0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.041114 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.044897 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.044969 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.045425 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.045622 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.045799 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.074319 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b"] Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.203969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.204083 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.204240 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.204293 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.204388 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.204452 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.204626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98xcq\" (UniqueName: \"kubernetes.io/projected/7f69bd70-7951-4978-ad8e-dea9637e476a-kube-api-access-98xcq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.306807 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.306900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.306979 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.307041 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.307096 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98xcq\" (UniqueName: \"kubernetes.io/projected/7f69bd70-7951-4978-ad8e-dea9637e476a-kube-api-access-98xcq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.307264 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.307408 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.317015 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.317393 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.318248 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.319222 4805 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.320068 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.323219 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.341575 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98xcq\" (UniqueName: \"kubernetes.io/projected/7f69bd70-7951-4978-ad8e-dea9637e476a-kube-api-access-98xcq\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:48 crc kubenswrapper[4805]: I0217 01:02:48.370551 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:02:49 crc kubenswrapper[4805]: I0217 01:02:49.025481 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b"] Feb 17 01:02:49 crc kubenswrapper[4805]: I0217 01:02:49.123451 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" event={"ID":"7f69bd70-7951-4978-ad8e-dea9637e476a","Type":"ContainerStarted","Data":"022f7d8ac1b36450f3bddc43a7d6d658361c8505229e513d08e6e263e89cef78"} Feb 17 01:02:49 crc kubenswrapper[4805]: E0217 01:02:49.786820 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:02:50 crc kubenswrapper[4805]: I0217 01:02:50.140819 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" event={"ID":"7f69bd70-7951-4978-ad8e-dea9637e476a","Type":"ContainerStarted","Data":"a613a5633d6e4ebe0b39c0387524b86e88f5cc0abb8075e07d9d80b3ca1c2615"} Feb 17 01:02:50 crc kubenswrapper[4805]: I0217 01:02:50.179912 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" podStartSLOduration=1.7368872309999999 podStartE2EDuration="2.179895448s" podCreationTimestamp="2026-02-17 01:02:48 +0000 UTC" firstStartedPulling="2026-02-17 01:02:49.028285167 +0000 UTC m=+2395.044094565" lastFinishedPulling="2026-02-17 01:02:49.471293384 +0000 UTC m=+2395.487102782" observedRunningTime="2026-02-17 01:02:50.170966915 +0000 UTC m=+2396.186776343" watchObservedRunningTime="2026-02-17 01:02:50.179895448 +0000 UTC m=+2396.195704846" Feb 17 01:02:51 crc kubenswrapper[4805]: I0217 01:02:51.785545 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:02:51 crc kubenswrapper[4805]: E0217 01:02:51.788169 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:03:00 crc kubenswrapper[4805]: E0217 01:03:00.788052 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:03:04 crc kubenswrapper[4805]: E0217 01:03:04.812477 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:03:06 crc kubenswrapper[4805]: I0217 
01:03:06.784513 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:03:06 crc kubenswrapper[4805]: E0217 01:03:06.785467 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:03:15 crc kubenswrapper[4805]: E0217 01:03:15.788483 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:03:17 crc kubenswrapper[4805]: E0217 01:03:17.786777 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:03:20 crc kubenswrapper[4805]: I0217 01:03:20.784550 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:03:20 crc kubenswrapper[4805]: E0217 01:03:20.786205 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:03:26 crc kubenswrapper[4805]: I0217 01:03:26.787262 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:03:26 crc kubenswrapper[4805]: E0217 01:03:26.911067 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:03:26 crc kubenswrapper[4805]: E0217 01:03:26.911794 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:03:26 crc kubenswrapper[4805]: E0217 01:03:26.912031 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:03:26 crc kubenswrapper[4805]: E0217 01:03:26.913274 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:03:29 crc kubenswrapper[4805]: E0217 01:03:29.914976 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:03:29 crc kubenswrapper[4805]: E0217 01:03:29.915305 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:03:29 crc kubenswrapper[4805]: E0217 01:03:29.915490 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:03:29 crc kubenswrapper[4805]: E0217 01:03:29.916651 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:03:31 crc kubenswrapper[4805]: I0217 01:03:31.785560 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:03:31 crc kubenswrapper[4805]: E0217 01:03:31.786306 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:03:38 crc kubenswrapper[4805]: E0217 01:03:38.787227 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:03:42 crc kubenswrapper[4805]: I0217 01:03:42.812871 4805 generic.go:334] "Generic (PLEG): container finished" podID="7f69bd70-7951-4978-ad8e-dea9637e476a" containerID="a613a5633d6e4ebe0b39c0387524b86e88f5cc0abb8075e07d9d80b3ca1c2615" exitCode=2 Feb 17 01:03:42 crc kubenswrapper[4805]: I0217 01:03:42.813015 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" event={"ID":"7f69bd70-7951-4978-ad8e-dea9637e476a","Type":"ContainerDied","Data":"a613a5633d6e4ebe0b39c0387524b86e88f5cc0abb8075e07d9d80b3ca1c2615"} Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 
01:03:44.421871 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.529552 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-inventory\") pod \"7f69bd70-7951-4978-ad8e-dea9637e476a\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.529654 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-0\") pod \"7f69bd70-7951-4978-ad8e-dea9637e476a\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.529684 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-1\") pod \"7f69bd70-7951-4978-ad8e-dea9637e476a\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.529821 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98xcq\" (UniqueName: \"kubernetes.io/projected/7f69bd70-7951-4978-ad8e-dea9637e476a-kube-api-access-98xcq\") pod \"7f69bd70-7951-4978-ad8e-dea9637e476a\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.529867 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-2\") pod \"7f69bd70-7951-4978-ad8e-dea9637e476a\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.529970 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-telemetry-combined-ca-bundle\") pod \"7f69bd70-7951-4978-ad8e-dea9637e476a\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.530001 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ssh-key-openstack-edpm-ipam\") pod \"7f69bd70-7951-4978-ad8e-dea9637e476a\" (UID: \"7f69bd70-7951-4978-ad8e-dea9637e476a\") " Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.537513 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f69bd70-7951-4978-ad8e-dea9637e476a-kube-api-access-98xcq" (OuterVolumeSpecName: "kube-api-access-98xcq") pod "7f69bd70-7951-4978-ad8e-dea9637e476a" (UID: "7f69bd70-7951-4978-ad8e-dea9637e476a"). InnerVolumeSpecName "kube-api-access-98xcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.540168 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7f69bd70-7951-4978-ad8e-dea9637e476a" (UID: "7f69bd70-7951-4978-ad8e-dea9637e476a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.564886 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "7f69bd70-7951-4978-ad8e-dea9637e476a" (UID: "7f69bd70-7951-4978-ad8e-dea9637e476a"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.579476 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-inventory" (OuterVolumeSpecName: "inventory") pod "7f69bd70-7951-4978-ad8e-dea9637e476a" (UID: "7f69bd70-7951-4978-ad8e-dea9637e476a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.584758 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f69bd70-7951-4978-ad8e-dea9637e476a" (UID: "7f69bd70-7951-4978-ad8e-dea9637e476a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.586230 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "7f69bd70-7951-4978-ad8e-dea9637e476a" (UID: "7f69bd70-7951-4978-ad8e-dea9637e476a"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.590913 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "7f69bd70-7951-4978-ad8e-dea9637e476a" (UID: "7f69bd70-7951-4978-ad8e-dea9637e476a"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.632752 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.632811 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.632835 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98xcq\" (UniqueName: \"kubernetes.io/projected/7f69bd70-7951-4978-ad8e-dea9637e476a-kube-api-access-98xcq\") on node \"crc\" DevicePath \"\"" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.632854 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.632874 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.632891 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.632909 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f69bd70-7951-4978-ad8e-dea9637e476a-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.798267 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:03:44 crc kubenswrapper[4805]: E0217 01:03:44.798858 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:03:44 crc kubenswrapper[4805]: E0217 01:03:44.800209 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.842113 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" event={"ID":"7f69bd70-7951-4978-ad8e-dea9637e476a","Type":"ContainerDied","Data":"022f7d8ac1b36450f3bddc43a7d6d658361c8505229e513d08e6e263e89cef78"} Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 
01:03:44.842416 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="022f7d8ac1b36450f3bddc43a7d6d658361c8505229e513d08e6e263e89cef78" Feb 17 01:03:44 crc kubenswrapper[4805]: I0217 01:03:44.842194 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b" Feb 17 01:03:49 crc kubenswrapper[4805]: E0217 01:03:49.787606 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:03:56 crc kubenswrapper[4805]: I0217 01:03:56.784745 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:03:56 crc kubenswrapper[4805]: E0217 01:03:56.785842 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:03:58 crc kubenswrapper[4805]: E0217 01:03:58.791540 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:04:00 crc kubenswrapper[4805]: E0217 01:04:00.788749 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.058815 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh"] Feb 17 01:04:02 crc kubenswrapper[4805]: E0217 01:04:02.061198 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f69bd70-7951-4978-ad8e-dea9637e476a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.061403 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f69bd70-7951-4978-ad8e-dea9637e476a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.061990 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f69bd70-7951-4978-ad8e-dea9637e476a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.063640 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.069292 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.069594 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.069709 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.069927 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.069970 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.108536 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh"] Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.157202 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.157461 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.157635 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hx2k\" (UniqueName: \"kubernetes.io/projected/2f4b1196-c56e-477f-93ac-a1911fe564ef-kube-api-access-8hx2k\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.157699 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.157749 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 
crc kubenswrapper[4805]: I0217 01:04:02.157855 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.158040 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.260302 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.260413 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.260526 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.260629 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hx2k\" (UniqueName: \"kubernetes.io/projected/2f4b1196-c56e-477f-93ac-a1911fe564ef-kube-api-access-8hx2k\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.260671 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.260702 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" 
(UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.260770 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.269562 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.270515 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.270817 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.271732 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.273007 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.284222 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.297280 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hx2k\" (UniqueName: \"kubernetes.io/projected/2f4b1196-c56e-477f-93ac-a1911fe564ef-kube-api-access-8hx2k\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-rttgh\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:02 crc kubenswrapper[4805]: I0217 01:04:02.407958 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:03 crc kubenswrapper[4805]: I0217 01:04:03.055658 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh"] Feb 17 01:04:03 crc kubenswrapper[4805]: I0217 01:04:03.093565 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" event={"ID":"2f4b1196-c56e-477f-93ac-a1911fe564ef","Type":"ContainerStarted","Data":"a46015dbd7484f97c2b6bf1e4d58c6024d3932c76a3f2112007763d71f1db2b3"} Feb 17 01:04:04 crc kubenswrapper[4805]: I0217 01:04:04.110588 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" event={"ID":"2f4b1196-c56e-477f-93ac-a1911fe564ef","Type":"ContainerStarted","Data":"e82a8abb8bf219beef93404461ace8788e2fc4f6555cc969e6791c442835dc4c"} Feb 17 01:04:04 crc kubenswrapper[4805]: I0217 01:04:04.133805 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" podStartSLOduration=1.681993266 podStartE2EDuration="2.133777531s" podCreationTimestamp="2026-02-17 01:04:02 +0000 UTC" firstStartedPulling="2026-02-17 01:04:03.053592736 +0000 UTC m=+2469.069402154" lastFinishedPulling="2026-02-17 01:04:03.505377001 +0000 UTC m=+2469.521186419" observedRunningTime="2026-02-17 01:04:04.132619918 +0000 UTC m=+2470.148429336" watchObservedRunningTime="2026-02-17 01:04:04.133777531 +0000 UTC m=+2470.149586969" Feb 17 01:04:08 crc kubenswrapper[4805]: I0217 01:04:08.785513 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:04:08 crc kubenswrapper[4805]: E0217 01:04:08.786755 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:04:10 crc kubenswrapper[4805]: E0217 01:04:10.788781 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:04:15 crc kubenswrapper[4805]: E0217 01:04:15.788407 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:04:21 crc kubenswrapper[4805]: I0217 01:04:21.785381 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 
17 01:04:21 crc kubenswrapper[4805]: E0217 01:04:21.786007 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:04:24 crc kubenswrapper[4805]: E0217 01:04:24.806145 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:04:29 crc kubenswrapper[4805]: E0217 01:04:29.788234 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:04:34 crc kubenswrapper[4805]: I0217 01:04:34.791700 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:04:35 crc kubenswrapper[4805]: I0217 01:04:35.525430 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"eb1f775cd3f02bf701232127480fc07531993028f084a510226723c0e5ae9ba3"} Feb 17 01:04:37 crc kubenswrapper[4805]: E0217 01:04:37.788385 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:04:43 crc kubenswrapper[4805]: E0217 01:04:43.787618 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:04:51 crc kubenswrapper[4805]: E0217 01:04:51.789486 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:04:55 crc kubenswrapper[4805]: I0217 01:04:55.758866 4805 generic.go:334] "Generic (PLEG): container finished" podID="2f4b1196-c56e-477f-93ac-a1911fe564ef" containerID="e82a8abb8bf219beef93404461ace8788e2fc4f6555cc969e6791c442835dc4c" exitCode=2 Feb 17 01:04:55 crc kubenswrapper[4805]: I0217 01:04:55.758961 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" 
event={"ID":"2f4b1196-c56e-477f-93ac-a1911fe564ef","Type":"ContainerDied","Data":"e82a8abb8bf219beef93404461ace8788e2fc4f6555cc969e6791c442835dc4c"} Feb 17 01:04:55 crc kubenswrapper[4805]: E0217 01:04:55.787566 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.256213 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.342722 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-0\") pod \"2f4b1196-c56e-477f-93ac-a1911fe564ef\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.343079 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-1\") pod \"2f4b1196-c56e-477f-93ac-a1911fe564ef\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.343134 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-inventory\") pod \"2f4b1196-c56e-477f-93ac-a1911fe564ef\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.343152 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hx2k\" (UniqueName: \"kubernetes.io/projected/2f4b1196-c56e-477f-93ac-a1911fe564ef-kube-api-access-8hx2k\") pod \"2f4b1196-c56e-477f-93ac-a1911fe564ef\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.343224 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ssh-key-openstack-edpm-ipam\") pod \"2f4b1196-c56e-477f-93ac-a1911fe564ef\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.343280 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-2\") pod \"2f4b1196-c56e-477f-93ac-a1911fe564ef\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.343360 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-telemetry-combined-ca-bundle\") pod \"2f4b1196-c56e-477f-93ac-a1911fe564ef\" (UID: \"2f4b1196-c56e-477f-93ac-a1911fe564ef\") " Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.363269 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2f4b1196-c56e-477f-93ac-a1911fe564ef-kube-api-access-8hx2k" (OuterVolumeSpecName: "kube-api-access-8hx2k") pod "2f4b1196-c56e-477f-93ac-a1911fe564ef" (UID: "2f4b1196-c56e-477f-93ac-a1911fe564ef"). InnerVolumeSpecName "kube-api-access-8hx2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.365939 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2f4b1196-c56e-477f-93ac-a1911fe564ef" (UID: "2f4b1196-c56e-477f-93ac-a1911fe564ef"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.371510 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f4b1196-c56e-477f-93ac-a1911fe564ef" (UID: "2f4b1196-c56e-477f-93ac-a1911fe564ef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.373124 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-inventory" (OuterVolumeSpecName: "inventory") pod "2f4b1196-c56e-477f-93ac-a1911fe564ef" (UID: "2f4b1196-c56e-477f-93ac-a1911fe564ef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.374441 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2f4b1196-c56e-477f-93ac-a1911fe564ef" (UID: "2f4b1196-c56e-477f-93ac-a1911fe564ef"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.381798 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2f4b1196-c56e-477f-93ac-a1911fe564ef" (UID: "2f4b1196-c56e-477f-93ac-a1911fe564ef"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.382301 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2f4b1196-c56e-477f-93ac-a1911fe564ef" (UID: "2f4b1196-c56e-477f-93ac-a1911fe564ef"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.445514 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.445538 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.445548 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hx2k\" (UniqueName: \"kubernetes.io/projected/2f4b1196-c56e-477f-93ac-a1911fe564ef-kube-api-access-8hx2k\") on node \"crc\" DevicePath \"\"" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.445557 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.445566 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.445574 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.445582 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f4b1196-c56e-477f-93ac-a1911fe564ef-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.791262 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" event={"ID":"2f4b1196-c56e-477f-93ac-a1911fe564ef","Type":"ContainerDied","Data":"a46015dbd7484f97c2b6bf1e4d58c6024d3932c76a3f2112007763d71f1db2b3"} Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.791351 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a46015dbd7484f97c2b6bf1e4d58c6024d3932c76a3f2112007763d71f1db2b3" Feb 17 01:04:57 crc kubenswrapper[4805]: I0217 01:04:57.791437 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rttgh" Feb 17 01:05:06 crc kubenswrapper[4805]: E0217 01:05:06.789182 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:05:10 crc kubenswrapper[4805]: E0217 01:05:10.790113 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:05:21 crc kubenswrapper[4805]: E0217 01:05:21.786954 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:05:21 crc kubenswrapper[4805]: E0217 01:05:21.787016 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:05:33 crc kubenswrapper[4805]: E0217 01:05:33.787851 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.044523 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t"] Feb 17 01:05:35 crc kubenswrapper[4805]: E0217 01:05:35.045669 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4b1196-c56e-477f-93ac-a1911fe564ef" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.045698 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4b1196-c56e-477f-93ac-a1911fe564ef" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.046101 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4b1196-c56e-477f-93ac-a1911fe564ef" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.047373 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.049912 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.050501 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.050927 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.052365 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.053877 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.071181 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t"] Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.102925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.103177 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks9r9\" (UniqueName: \"kubernetes.io/projected/14e189b9-6c07-4b19-aba3-9f357bfa7639-kube-api-access-ks9r9\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.103241 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.103397 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.103537 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.103746 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.104042 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.205904 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks9r9\" (UniqueName: \"kubernetes.io/projected/14e189b9-6c07-4b19-aba3-9f357bfa7639-kube-api-access-ks9r9\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.205977 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.206048 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.206118 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.206251 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.206341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.206562 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.211995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.212051 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.212522 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.213644 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.215019 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.216568 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.227195 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks9r9\" (UniqueName: 
\"kubernetes.io/projected/14e189b9-6c07-4b19-aba3-9f357bfa7639-kube-api-access-ks9r9\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: I0217 01:05:35.374219 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:05:35 crc kubenswrapper[4805]: E0217 01:05:35.791999 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:05:36 crc kubenswrapper[4805]: I0217 01:05:36.024870 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t"] Feb 17 01:05:36 crc kubenswrapper[4805]: I0217 01:05:36.334596 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" event={"ID":"14e189b9-6c07-4b19-aba3-9f357bfa7639","Type":"ContainerStarted","Data":"fafd09d76f437b12947f9c639a4355094fb9386c8a538842bd3169594de45596"} Feb 17 01:05:37 crc kubenswrapper[4805]: I0217 01:05:37.343139 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" event={"ID":"14e189b9-6c07-4b19-aba3-9f357bfa7639","Type":"ContainerStarted","Data":"bd548beed5aa95f33d413602e55b6af83c0527b279b83a2a9bd7b4e57f0b9b76"} Feb 17 01:05:37 crc kubenswrapper[4805]: I0217 01:05:37.361233 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" podStartSLOduration=1.866253324 podStartE2EDuration="2.361206841s" podCreationTimestamp="2026-02-17 01:05:35 +0000 UTC" firstStartedPulling="2026-02-17 01:05:36.030424558 +0000 UTC m=+2562.046233966" lastFinishedPulling="2026-02-17 01:05:36.525378055 +0000 UTC m=+2562.541187483" observedRunningTime="2026-02-17 01:05:37.358775483 +0000 UTC m=+2563.374584891" watchObservedRunningTime="2026-02-17 01:05:37.361206841 +0000 UTC m=+2563.377016259" Feb 17 01:05:47 crc kubenswrapper[4805]: E0217 01:05:47.786910 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:05:50 crc kubenswrapper[4805]: E0217 01:05:50.787709 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:06:00 crc kubenswrapper[4805]: E0217 01:06:00.788864 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:06:02 crc kubenswrapper[4805]: E0217 01:06:02.788011 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:06:13 crc kubenswrapper[4805]: E0217 01:06:13.788691 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:06:14 crc kubenswrapper[4805]: E0217 01:06:14.797773 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:06:27 crc kubenswrapper[4805]: E0217 01:06:27.788274 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:06:27 crc kubenswrapper[4805]: I0217 01:06:27.996204 4805 generic.go:334] "Generic (PLEG): container finished" podID="14e189b9-6c07-4b19-aba3-9f357bfa7639" containerID="bd548beed5aa95f33d413602e55b6af83c0527b279b83a2a9bd7b4e57f0b9b76" exitCode=2 Feb 17 01:06:27 crc kubenswrapper[4805]: I0217 01:06:27.996243 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" event={"ID":"14e189b9-6c07-4b19-aba3-9f357bfa7639","Type":"ContainerDied","Data":"bd548beed5aa95f33d413602e55b6af83c0527b279b83a2a9bd7b4e57f0b9b76"} Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.502928 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.691929 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks9r9\" (UniqueName: \"kubernetes.io/projected/14e189b9-6c07-4b19-aba3-9f357bfa7639-kube-api-access-ks9r9\") pod \"14e189b9-6c07-4b19-aba3-9f357bfa7639\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.692242 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-2\") pod \"14e189b9-6c07-4b19-aba3-9f357bfa7639\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.692294 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-telemetry-combined-ca-bundle\") pod \"14e189b9-6c07-4b19-aba3-9f357bfa7639\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.692313 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ssh-key-openstack-edpm-ipam\") pod \"14e189b9-6c07-4b19-aba3-9f357bfa7639\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.692438 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-1\") pod \"14e189b9-6c07-4b19-aba3-9f357bfa7639\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.692482 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-inventory\") pod \"14e189b9-6c07-4b19-aba3-9f357bfa7639\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.692520 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-0\") pod \"14e189b9-6c07-4b19-aba3-9f357bfa7639\" (UID: \"14e189b9-6c07-4b19-aba3-9f357bfa7639\") " Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.699201 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "14e189b9-6c07-4b19-aba3-9f357bfa7639" (UID: "14e189b9-6c07-4b19-aba3-9f357bfa7639"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.703298 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e189b9-6c07-4b19-aba3-9f357bfa7639-kube-api-access-ks9r9" (OuterVolumeSpecName: "kube-api-access-ks9r9") pod "14e189b9-6c07-4b19-aba3-9f357bfa7639" (UID: "14e189b9-6c07-4b19-aba3-9f357bfa7639"). InnerVolumeSpecName "kube-api-access-ks9r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.725168 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "14e189b9-6c07-4b19-aba3-9f357bfa7639" (UID: "14e189b9-6c07-4b19-aba3-9f357bfa7639"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.737595 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "14e189b9-6c07-4b19-aba3-9f357bfa7639" (UID: "14e189b9-6c07-4b19-aba3-9f357bfa7639"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.738131 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-inventory" (OuterVolumeSpecName: "inventory") pod "14e189b9-6c07-4b19-aba3-9f357bfa7639" (UID: "14e189b9-6c07-4b19-aba3-9f357bfa7639"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.742251 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "14e189b9-6c07-4b19-aba3-9f357bfa7639" (UID: "14e189b9-6c07-4b19-aba3-9f357bfa7639"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.750061 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "14e189b9-6c07-4b19-aba3-9f357bfa7639" (UID: "14e189b9-6c07-4b19-aba3-9f357bfa7639"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:06:29 crc kubenswrapper[4805]: E0217 01:06:29.786388 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.794764 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.794798 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.794814 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.794828 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks9r9\" (UniqueName: \"kubernetes.io/projected/14e189b9-6c07-4b19-aba3-9f357bfa7639-kube-api-access-ks9r9\") on node \"crc\" DevicePath \"\"" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.794842 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.794854 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 01:06:29 crc kubenswrapper[4805]: I0217 01:06:29.794866 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14e189b9-6c07-4b19-aba3-9f357bfa7639-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 01:06:30 crc kubenswrapper[4805]: I0217 01:06:30.028287 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" event={"ID":"14e189b9-6c07-4b19-aba3-9f357bfa7639","Type":"ContainerDied","Data":"fafd09d76f437b12947f9c639a4355094fb9386c8a538842bd3169594de45596"} Feb 17 01:06:30 crc kubenswrapper[4805]: I0217 01:06:30.028370 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fafd09d76f437b12947f9c639a4355094fb9386c8a538842bd3169594de45596" Feb 17 01:06:30 crc kubenswrapper[4805]: I0217 01:06:30.028451 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t" Feb 17 01:06:41 crc kubenswrapper[4805]: E0217 01:06:41.789775 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:06:42 crc kubenswrapper[4805]: E0217 01:06:42.787597 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:06:52 crc kubenswrapper[4805]: E0217 01:06:52.788811 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:06:53 crc kubenswrapper[4805]: I0217 01:06:53.077543 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:06:53 crc kubenswrapper[4805]: I0217 01:06:53.077956 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:06:57 crc kubenswrapper[4805]: E0217 01:06:57.787806 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:07:06 crc kubenswrapper[4805]: E0217 01:07:06.790601 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:07:10 crc kubenswrapper[4805]: E0217 01:07:10.787595 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:07:21 crc kubenswrapper[4805]: E0217 01:07:21.788842 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:07:23 crc kubenswrapper[4805]: I0217 01:07:23.077429 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:07:23 crc kubenswrapper[4805]: I0217 01:07:23.077758 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:07:25 crc kubenswrapper[4805]: E0217 01:07:25.788263 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:07:36 crc kubenswrapper[4805]: E0217 01:07:36.788572 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:07:40 crc kubenswrapper[4805]: E0217 01:07:40.790071 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.058821 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp"] Feb 17 01:07:47 crc kubenswrapper[4805]: E0217 01:07:47.060357 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e189b9-6c07-4b19-aba3-9f357bfa7639" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.060395 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e189b9-6c07-4b19-aba3-9f357bfa7639" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.060834 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e189b9-6c07-4b19-aba3-9f357bfa7639" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.062218 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.069260 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.079029 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.079293 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.079688 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.079879 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.089573 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp"] Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.188850 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.189223 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.189277 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.189343 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wqqh\" (UniqueName: \"kubernetes.io/projected/ed5ab321-ffbb-45a2-8cec-03034de09b60-kube-api-access-9wqqh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.189381 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.189419 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.189459 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.290658 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.290702 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.290781 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.290829 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.290882 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wqqh\" (UniqueName: \"kubernetes.io/projected/ed5ab321-ffbb-45a2-8cec-03034de09b60-kube-api-access-9wqqh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.290908 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.290926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.296135 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.296716 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.296732 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.299458 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.308556 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.309723 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.311539 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wqqh\" 
(UniqueName: \"kubernetes.io/projected/ed5ab321-ffbb-45a2-8cec-03034de09b60-kube-api-access-9wqqh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:47 crc kubenswrapper[4805]: I0217 01:07:47.399550 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:07:48 crc kubenswrapper[4805]: I0217 01:07:48.032086 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp"] Feb 17 01:07:48 crc kubenswrapper[4805]: I0217 01:07:48.120875 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" event={"ID":"ed5ab321-ffbb-45a2-8cec-03034de09b60","Type":"ContainerStarted","Data":"4e4eb2b6ee2e2a0a8938a55ee4dda136e610e27e63ffe4488320292891b637b2"} Feb 17 01:07:48 crc kubenswrapper[4805]: E0217 01:07:48.787119 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:07:49 crc kubenswrapper[4805]: I0217 01:07:49.136302 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" event={"ID":"ed5ab321-ffbb-45a2-8cec-03034de09b60","Type":"ContainerStarted","Data":"eed2c52d060ede7d4b04f12b5a5473ce3b07fef53801d7f73424e4f9e702a9a6"} Feb 17 01:07:49 crc kubenswrapper[4805]: I0217 01:07:49.173219 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" podStartSLOduration=1.7394561419999999 podStartE2EDuration="2.173193773s" podCreationTimestamp="2026-02-17 01:07:47 +0000 UTC" firstStartedPulling="2026-02-17 01:07:48.03157127 +0000 UTC m=+2694.047380678" lastFinishedPulling="2026-02-17 01:07:48.465308871 +0000 UTC m=+2694.481118309" observedRunningTime="2026-02-17 01:07:49.162453722 +0000 UTC m=+2695.178263130" watchObservedRunningTime="2026-02-17 01:07:49.173193773 +0000 UTC m=+2695.189003211" Feb 17 01:07:52 crc kubenswrapper[4805]: E0217 01:07:52.789907 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:07:53 crc kubenswrapper[4805]: I0217 01:07:53.076686 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:07:53 crc kubenswrapper[4805]: I0217 01:07:53.076766 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 17 01:07:53 crc kubenswrapper[4805]: I0217 01:07:53.076824 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 01:07:53 crc kubenswrapper[4805]: I0217 01:07:53.077994 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb1f775cd3f02bf701232127480fc07531993028f084a510226723c0e5ae9ba3"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 01:07:53 crc kubenswrapper[4805]: I0217 01:07:53.078130 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://eb1f775cd3f02bf701232127480fc07531993028f084a510226723c0e5ae9ba3" gracePeriod=600 Feb 17 01:07:54 crc kubenswrapper[4805]: I0217 01:07:54.201371 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="eb1f775cd3f02bf701232127480fc07531993028f084a510226723c0e5ae9ba3" exitCode=0 Feb 17 01:07:54 crc kubenswrapper[4805]: I0217 01:07:54.201414 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"eb1f775cd3f02bf701232127480fc07531993028f084a510226723c0e5ae9ba3"} Feb 17 01:07:54 crc kubenswrapper[4805]: I0217 01:07:54.201876 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba"} Feb 17 01:07:54 crc kubenswrapper[4805]: I0217 01:07:54.201899 4805 scope.go:117] "RemoveContainer" containerID="8e5affb62a0fdfeddd8d6e8546befeaff954c013d3f1eac8282ce02a5c78a13e" Feb 17 01:08:02 crc kubenswrapper[4805]: E0217 01:08:02.805006 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:08:05 crc kubenswrapper[4805]: E0217 01:08:05.787224 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:08:16 crc kubenswrapper[4805]: E0217 01:08:16.788361 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:08:20 crc kubenswrapper[4805]: E0217 01:08:20.788758 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:08:28 crc kubenswrapper[4805]: E0217 01:08:28.786878 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:08:34 crc kubenswrapper[4805]: I0217 01:08:34.799964 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:08:34 crc kubenswrapper[4805]: E0217 01:08:34.916452 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:08:34 crc kubenswrapper[4805]: E0217 01:08:34.916527 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:08:34 crc kubenswrapper[4805]: E0217 01:08:34.916681 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:08:34 crc kubenswrapper[4805]: E0217 01:08:34.918157 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:08:40 crc kubenswrapper[4805]: I0217 01:08:40.729571 4805 generic.go:334] "Generic (PLEG): container finished" podID="ed5ab321-ffbb-45a2-8cec-03034de09b60" containerID="eed2c52d060ede7d4b04f12b5a5473ce3b07fef53801d7f73424e4f9e702a9a6" exitCode=2 Feb 17 01:08:40 crc kubenswrapper[4805]: I0217 01:08:40.729677 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" event={"ID":"ed5ab321-ffbb-45a2-8cec-03034de09b60","Type":"ContainerDied","Data":"eed2c52d060ede7d4b04f12b5a5473ce3b07fef53801d7f73424e4f9e702a9a6"} Feb 17 01:08:41 crc kubenswrapper[4805]: E0217 01:08:41.919777 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:08:41 crc kubenswrapper[4805]: E0217 01:08:41.919836 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:08:41 crc kubenswrapper[4805]: E0217 01:08:41.919989 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:08:41 crc kubenswrapper[4805]: E0217 01:08:41.921228 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.390356 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.507897 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ssh-key-openstack-edpm-ipam\") pod \"ed5ab321-ffbb-45a2-8cec-03034de09b60\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.508024 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-telemetry-combined-ca-bundle\") pod \"ed5ab321-ffbb-45a2-8cec-03034de09b60\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.508160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-2\") pod \"ed5ab321-ffbb-45a2-8cec-03034de09b60\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.508318 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-inventory\") pod \"ed5ab321-ffbb-45a2-8cec-03034de09b60\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.508533 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wqqh\" (UniqueName: \"kubernetes.io/projected/ed5ab321-ffbb-45a2-8cec-03034de09b60-kube-api-access-9wqqh\") pod \"ed5ab321-ffbb-45a2-8cec-03034de09b60\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.508598 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-1\") pod \"ed5ab321-ffbb-45a2-8cec-03034de09b60\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.508732 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-0\") pod \"ed5ab321-ffbb-45a2-8cec-03034de09b60\" (UID: \"ed5ab321-ffbb-45a2-8cec-03034de09b60\") " Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.514416 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ed5ab321-ffbb-45a2-8cec-03034de09b60" (UID: "ed5ab321-ffbb-45a2-8cec-03034de09b60"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.547715 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ed5ab321-ffbb-45a2-8cec-03034de09b60" (UID: "ed5ab321-ffbb-45a2-8cec-03034de09b60"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.549351 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5ab321-ffbb-45a2-8cec-03034de09b60-kube-api-access-9wqqh" (OuterVolumeSpecName: "kube-api-access-9wqqh") pod "ed5ab321-ffbb-45a2-8cec-03034de09b60" (UID: "ed5ab321-ffbb-45a2-8cec-03034de09b60"). InnerVolumeSpecName "kube-api-access-9wqqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.571942 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-inventory" (OuterVolumeSpecName: "inventory") pod "ed5ab321-ffbb-45a2-8cec-03034de09b60" (UID: "ed5ab321-ffbb-45a2-8cec-03034de09b60"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.587459 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ed5ab321-ffbb-45a2-8cec-03034de09b60" (UID: "ed5ab321-ffbb-45a2-8cec-03034de09b60"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.594562 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed5ab321-ffbb-45a2-8cec-03034de09b60" (UID: "ed5ab321-ffbb-45a2-8cec-03034de09b60"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.594997 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ed5ab321-ffbb-45a2-8cec-03034de09b60" (UID: "ed5ab321-ffbb-45a2-8cec-03034de09b60"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.615916 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.615953 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wqqh\" (UniqueName: \"kubernetes.io/projected/ed5ab321-ffbb-45a2-8cec-03034de09b60-kube-api-access-9wqqh\") on node \"crc\" DevicePath \"\"" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.615969 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.615984 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.615996 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.616009 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.616020 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ed5ab321-ffbb-45a2-8cec-03034de09b60-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.755578 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" event={"ID":"ed5ab321-ffbb-45a2-8cec-03034de09b60","Type":"ContainerDied","Data":"4e4eb2b6ee2e2a0a8938a55ee4dda136e610e27e63ffe4488320292891b637b2"} Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.755637 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e4eb2b6ee2e2a0a8938a55ee4dda136e610e27e63ffe4488320292891b637b2" Feb 17 01:08:42 crc kubenswrapper[4805]: I0217 01:08:42.755677 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp" Feb 17 01:08:47 crc kubenswrapper[4805]: E0217 01:08:47.787858 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:08:53 crc kubenswrapper[4805]: E0217 01:08:53.787528 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:08:58 crc kubenswrapper[4805]: E0217 01:08:58.789737 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:09:07 crc kubenswrapper[4805]: E0217 01:09:07.787969 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:09:10 crc kubenswrapper[4805]: E0217 01:09:10.825138 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:09:20 crc kubenswrapper[4805]: E0217 01:09:20.787601 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:09:25 crc kubenswrapper[4805]: E0217 01:09:25.788020 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:09:32 crc kubenswrapper[4805]: E0217 01:09:32.790019 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:09:40 crc kubenswrapper[4805]: E0217 01:09:40.788344 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:09:47 crc kubenswrapper[4805]: E0217 01:09:47.787756 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:09:53 crc kubenswrapper[4805]: I0217 01:09:53.077877 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:09:53 crc kubenswrapper[4805]: I0217 01:09:53.078530 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:09:54 crc kubenswrapper[4805]: E0217 01:09:54.800880 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:10:00 crc kubenswrapper[4805]: E0217 01:10:00.787143 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:10:07 crc kubenswrapper[4805]: E0217 01:10:07.789200 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:10:14 crc kubenswrapper[4805]: E0217 01:10:14.804652 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:10:17 crc kubenswrapper[4805]: I0217 01:10:17.819967 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-phpfd"] Feb 17 01:10:17 crc kubenswrapper[4805]: E0217 01:10:17.821933 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5ab321-ffbb-45a2-8cec-03034de09b60" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:10:17 crc kubenswrapper[4805]: I0217 01:10:17.822031 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5ab321-ffbb-45a2-8cec-03034de09b60" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:10:17 crc kubenswrapper[4805]: I0217 01:10:17.822289 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed5ab321-ffbb-45a2-8cec-03034de09b60" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:10:17 crc kubenswrapper[4805]: I0217 01:10:17.823896 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:17 crc kubenswrapper[4805]: I0217 01:10:17.840968 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phpfd"] Feb 17 01:10:17 crc kubenswrapper[4805]: I0217 01:10:17.992701 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-utilities\") pod \"certified-operators-phpfd\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:17 crc kubenswrapper[4805]: I0217 01:10:17.993249 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-catalog-content\") pod \"certified-operators-phpfd\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:17 crc kubenswrapper[4805]: I0217 01:10:17.993399 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhqzf\" (UniqueName: \"kubernetes.io/projected/0a01c5de-51fd-43d7-bfba-503b8ed21888-kube-api-access-fhqzf\") pod \"certified-operators-phpfd\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:18 crc kubenswrapper[4805]: I0217 01:10:18.095008 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-catalog-content\") pod \"certified-operators-phpfd\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:18 crc kubenswrapper[4805]: I0217 01:10:18.095065 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqzf\" (UniqueName: \"kubernetes.io/projected/0a01c5de-51fd-43d7-bfba-503b8ed21888-kube-api-access-fhqzf\") pod \"certified-operators-phpfd\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:18 crc kubenswrapper[4805]: I0217 01:10:18.095203 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-utilities\") pod \"certified-operators-phpfd\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:18 crc kubenswrapper[4805]: I0217 01:10:18.095688 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-utilities\") pod \"certified-operators-phpfd\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:18 crc kubenswrapper[4805]: I0217 01:10:18.095926 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-catalog-content\") pod \"certified-operators-phpfd\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:18 crc kubenswrapper[4805]: I0217 01:10:18.123292 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqzf\" (UniqueName: \"kubernetes.io/projected/0a01c5de-51fd-43d7-bfba-503b8ed21888-kube-api-access-fhqzf\") pod \"certified-operators-phpfd\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:18 crc kubenswrapper[4805]: I0217 01:10:18.186956 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:18 crc kubenswrapper[4805]: I0217 01:10:18.704364 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phpfd"] Feb 17 01:10:19 crc kubenswrapper[4805]: I0217 01:10:19.004831 4805 generic.go:334] "Generic (PLEG): container finished" podID="0a01c5de-51fd-43d7-bfba-503b8ed21888" containerID="16e420b79d7fafcdbfcbd95406f0cb50175ea1df92f8973602614fb6e9559307" exitCode=0 Feb 17 01:10:19 crc kubenswrapper[4805]: I0217 01:10:19.004904 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phpfd" event={"ID":"0a01c5de-51fd-43d7-bfba-503b8ed21888","Type":"ContainerDied","Data":"16e420b79d7fafcdbfcbd95406f0cb50175ea1df92f8973602614fb6e9559307"} Feb 17 01:10:19 crc kubenswrapper[4805]: I0217 01:10:19.004966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phpfd" event={"ID":"0a01c5de-51fd-43d7-bfba-503b8ed21888","Type":"ContainerStarted","Data":"1d676907f17df9d54e0f749b9226d753de4953136810a255d9241e199ca2a13e"} Feb 17 01:10:20 crc kubenswrapper[4805]: I0217 01:10:20.019391 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phpfd" event={"ID":"0a01c5de-51fd-43d7-bfba-503b8ed21888","Type":"ContainerStarted","Data":"a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780"} Feb 17 01:10:22 crc kubenswrapper[4805]: I0217 01:10:22.044664 4805 generic.go:334] "Generic (PLEG): container finished" podID="0a01c5de-51fd-43d7-bfba-503b8ed21888" containerID="a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780" exitCode=0 Feb 17 01:10:22 crc kubenswrapper[4805]: I0217 01:10:22.044731 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phpfd" event={"ID":"0a01c5de-51fd-43d7-bfba-503b8ed21888","Type":"ContainerDied","Data":"a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780"} Feb 17 01:10:22 crc kubenswrapper[4805]: E0217 01:10:22.787458 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:10:23 crc kubenswrapper[4805]: I0217 01:10:23.061210 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phpfd" 
event={"ID":"0a01c5de-51fd-43d7-bfba-503b8ed21888","Type":"ContainerStarted","Data":"18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0"} Feb 17 01:10:23 crc kubenswrapper[4805]: I0217 01:10:23.077078 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:10:23 crc kubenswrapper[4805]: I0217 01:10:23.077149 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:10:23 crc kubenswrapper[4805]: I0217 01:10:23.084151 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-phpfd" podStartSLOduration=2.6145449259999998 podStartE2EDuration="6.084134148s" podCreationTimestamp="2026-02-17 01:10:17 +0000 UTC" firstStartedPulling="2026-02-17 01:10:19.00751872 +0000 UTC m=+2845.023328158" lastFinishedPulling="2026-02-17 01:10:22.477107972 +0000 UTC m=+2848.492917380" observedRunningTime="2026-02-17 01:10:23.083365617 +0000 UTC m=+2849.099175045" watchObservedRunningTime="2026-02-17 01:10:23.084134148 +0000 UTC m=+2849.099943546" Feb 17 01:10:28 crc kubenswrapper[4805]: I0217 01:10:28.187197 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:28 crc kubenswrapper[4805]: I0217 01:10:28.187710 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:28 crc kubenswrapper[4805]: I0217 01:10:28.261276 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:29 crc kubenswrapper[4805]: I0217 01:10:29.196804 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:29 crc kubenswrapper[4805]: I0217 01:10:29.265100 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phpfd"] Feb 17 01:10:29 crc kubenswrapper[4805]: E0217 01:10:29.788704 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:10:30 crc kubenswrapper[4805]: I0217 01:10:30.925784 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5qhj2"] Feb 17 01:10:30 crc kubenswrapper[4805]: I0217 01:10:30.929193 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:30 crc kubenswrapper[4805]: I0217 01:10:30.946403 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qhj2"] Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.128751 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbxv4\" (UniqueName: \"kubernetes.io/projected/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-kube-api-access-cbxv4\") pod \"redhat-marketplace-5qhj2\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.129052 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-catalog-content\") pod \"redhat-marketplace-5qhj2\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.129604 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-utilities\") pod \"redhat-marketplace-5qhj2\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.159211 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-phpfd" podUID="0a01c5de-51fd-43d7-bfba-503b8ed21888" containerName="registry-server" containerID="cri-o://18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0" gracePeriod=2 Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.231670 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-catalog-content\") pod \"redhat-marketplace-5qhj2\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.231845 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-utilities\") pod \"redhat-marketplace-5qhj2\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.231917 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbxv4\" (UniqueName: \"kubernetes.io/projected/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-kube-api-access-cbxv4\") pod \"redhat-marketplace-5qhj2\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.232816 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-catalog-content\") pod \"redhat-marketplace-5qhj2\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.233116 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-utilities\") pod \"redhat-marketplace-5qhj2\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.267499 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbxv4\" (UniqueName: \"kubernetes.io/projected/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-kube-api-access-cbxv4\") pod \"redhat-marketplace-5qhj2\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.294055 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.719791 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.840656 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qhj2"] Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.844005 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-catalog-content\") pod \"0a01c5de-51fd-43d7-bfba-503b8ed21888\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.844189 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-utilities\") pod \"0a01c5de-51fd-43d7-bfba-503b8ed21888\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.844572 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhqzf\" (UniqueName: \"kubernetes.io/projected/0a01c5de-51fd-43d7-bfba-503b8ed21888-kube-api-access-fhqzf\") pod \"0a01c5de-51fd-43d7-bfba-503b8ed21888\" (UID: \"0a01c5de-51fd-43d7-bfba-503b8ed21888\") " Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.846115 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-utilities" (OuterVolumeSpecName: "utilities") pod "0a01c5de-51fd-43d7-bfba-503b8ed21888" (UID: "0a01c5de-51fd-43d7-bfba-503b8ed21888"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.855543 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a01c5de-51fd-43d7-bfba-503b8ed21888-kube-api-access-fhqzf" (OuterVolumeSpecName: "kube-api-access-fhqzf") pod "0a01c5de-51fd-43d7-bfba-503b8ed21888" (UID: "0a01c5de-51fd-43d7-bfba-503b8ed21888"). InnerVolumeSpecName "kube-api-access-fhqzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.890763 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a01c5de-51fd-43d7-bfba-503b8ed21888" (UID: "0a01c5de-51fd-43d7-bfba-503b8ed21888"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.947260 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhqzf\" (UniqueName: \"kubernetes.io/projected/0a01c5de-51fd-43d7-bfba-503b8ed21888-kube-api-access-fhqzf\") on node \"crc\" DevicePath \"\"" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.947830 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:10:31 crc kubenswrapper[4805]: I0217 01:10:31.947842 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a01c5de-51fd-43d7-bfba-503b8ed21888-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.173703 4805 generic.go:334] "Generic (PLEG): container finished" podID="0a01c5de-51fd-43d7-bfba-503b8ed21888" containerID="18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0" exitCode=0 Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.173787 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phpfd" event={"ID":"0a01c5de-51fd-43d7-bfba-503b8ed21888","Type":"ContainerDied","Data":"18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0"} Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.173823 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phpfd" event={"ID":"0a01c5de-51fd-43d7-bfba-503b8ed21888","Type":"ContainerDied","Data":"1d676907f17df9d54e0f749b9226d753de4953136810a255d9241e199ca2a13e"} Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.173816 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-phpfd" Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.173879 4805 scope.go:117] "RemoveContainer" containerID="18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0" Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.176729 4805 generic.go:334] "Generic (PLEG): container finished" podID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" containerID="79791ff5adcab282cc9d7dd4078ba216a61eb70442676aa44bc95ce666d410bf" exitCode=0 Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.176749 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qhj2" event={"ID":"1cc7d28c-f2f7-4933-adee-972e04d4b3f5","Type":"ContainerDied","Data":"79791ff5adcab282cc9d7dd4078ba216a61eb70442676aa44bc95ce666d410bf"} Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.176764 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qhj2" event={"ID":"1cc7d28c-f2f7-4933-adee-972e04d4b3f5","Type":"ContainerStarted","Data":"a3780e6ef1339a566b25eab8b7998aa197bddbfb5f1c720feb276aee2ea7f203"} Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.222627 4805 scope.go:117] "RemoveContainer" containerID="a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780" Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.230102 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phpfd"] Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.248348 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-phpfd"] Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.264448 4805 scope.go:117] "RemoveContainer" containerID="16e420b79d7fafcdbfcbd95406f0cb50175ea1df92f8973602614fb6e9559307" Feb 17 01:10:32 crc kubenswrapper[4805]: E0217 01:10:32.349318 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a01c5de_51fd_43d7_bfba_503b8ed21888.slice/crio-1d676907f17df9d54e0f749b9226d753de4953136810a255d9241e199ca2a13e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a01c5de_51fd_43d7_bfba_503b8ed21888.slice\": RecentStats: unable to find data in memory cache]" Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.361276 4805 scope.go:117] "RemoveContainer" containerID="18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0" Feb 17 01:10:32 crc kubenswrapper[4805]: E0217 01:10:32.361790 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0\": container with ID starting with 18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0 not found: ID does not exist" containerID="18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0" Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.361918 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0"} err="failed to get container status \"18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0\": rpc error: code = NotFound desc = could not find container \"18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0\": container 
with ID starting with 18ad609cfb6415b4ca63d7c3e545e70a1363f15bcdfdc52837d45c4d814e8fb0 not found: ID does not exist" Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.362004 4805 scope.go:117] "RemoveContainer" containerID="a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780" Feb 17 01:10:32 crc kubenswrapper[4805]: E0217 01:10:32.362402 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780\": container with ID starting with a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780 not found: ID does not exist" containerID="a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780" Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.362554 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780"} err="failed to get container status \"a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780\": rpc error: code = NotFound desc = could not find container \"a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780\": container with ID starting with a9c76bcad72b751a753e0c81631a7e460840e11b2707f366e1a6b921ebbe4780 not found: ID does not exist" Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.362644 4805 scope.go:117] "RemoveContainer" containerID="16e420b79d7fafcdbfcbd95406f0cb50175ea1df92f8973602614fb6e9559307" Feb 17 01:10:32 crc kubenswrapper[4805]: E0217 01:10:32.363128 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16e420b79d7fafcdbfcbd95406f0cb50175ea1df92f8973602614fb6e9559307\": container with ID starting with 16e420b79d7fafcdbfcbd95406f0cb50175ea1df92f8973602614fb6e9559307 not found: ID does not exist" containerID="16e420b79d7fafcdbfcbd95406f0cb50175ea1df92f8973602614fb6e9559307" Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.363175 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16e420b79d7fafcdbfcbd95406f0cb50175ea1df92f8973602614fb6e9559307"} err="failed to get container status \"16e420b79d7fafcdbfcbd95406f0cb50175ea1df92f8973602614fb6e9559307\": rpc error: code = NotFound desc = could not find container \"16e420b79d7fafcdbfcbd95406f0cb50175ea1df92f8973602614fb6e9559307\": container with ID starting with 16e420b79d7fafcdbfcbd95406f0cb50175ea1df92f8973602614fb6e9559307 not found: ID does not exist" Feb 17 01:10:32 crc kubenswrapper[4805]: I0217 01:10:32.797188 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a01c5de-51fd-43d7-bfba-503b8ed21888" path="/var/lib/kubelet/pods/0a01c5de-51fd-43d7-bfba-503b8ed21888/volumes" Feb 17 01:10:33 crc kubenswrapper[4805]: I0217 01:10:33.189665 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qhj2" event={"ID":"1cc7d28c-f2f7-4933-adee-972e04d4b3f5","Type":"ContainerStarted","Data":"ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10"} Feb 17 01:10:34 crc kubenswrapper[4805]: I0217 01:10:34.205429 4805 generic.go:334] "Generic (PLEG): container finished" podID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" containerID="ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10" exitCode=0 Feb 17 01:10:34 crc kubenswrapper[4805]: I0217 01:10:34.205498 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5qhj2" event={"ID":"1cc7d28c-f2f7-4933-adee-972e04d4b3f5","Type":"ContainerDied","Data":"ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10"} Feb 17 01:10:35 crc kubenswrapper[4805]: I0217 01:10:35.215779 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qhj2" event={"ID":"1cc7d28c-f2f7-4933-adee-972e04d4b3f5","Type":"ContainerStarted","Data":"7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa"} Feb 17 01:10:35 crc kubenswrapper[4805]: I0217 01:10:35.233471 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5qhj2" podStartSLOduration=2.8000800249999998 podStartE2EDuration="5.233456328s" podCreationTimestamp="2026-02-17 01:10:30 +0000 UTC" firstStartedPulling="2026-02-17 01:10:32.1795061 +0000 UTC m=+2858.195315498" lastFinishedPulling="2026-02-17 01:10:34.612882363 +0000 UTC m=+2860.628691801" observedRunningTime="2026-02-17 01:10:35.232177682 +0000 UTC m=+2861.247987080" watchObservedRunningTime="2026-02-17 01:10:35.233456328 +0000 UTC m=+2861.249265726" Feb 17 01:10:37 crc kubenswrapper[4805]: E0217 01:10:37.788582 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:10:41 crc kubenswrapper[4805]: I0217 01:10:41.294539 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:41 crc kubenswrapper[4805]: I0217 01:10:41.295103 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:41 crc kubenswrapper[4805]: I0217 01:10:41.369198 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:42 crc kubenswrapper[4805]: I0217 01:10:42.391775 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:42 crc kubenswrapper[4805]: I0217 01:10:42.441522 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qhj2"] Feb 17 01:10:43 crc kubenswrapper[4805]: E0217 01:10:43.787143 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:10:44 crc kubenswrapper[4805]: I0217 01:10:44.317537 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5qhj2" podUID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" containerName="registry-server" containerID="cri-o://7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa" gracePeriod=2 Feb 17 01:10:44 crc kubenswrapper[4805]: I0217 01:10:44.848338 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:44 crc kubenswrapper[4805]: I0217 01:10:44.944987 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbxv4\" (UniqueName: \"kubernetes.io/projected/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-kube-api-access-cbxv4\") pod \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " Feb 17 01:10:44 crc kubenswrapper[4805]: I0217 01:10:44.945223 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-catalog-content\") pod \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " Feb 17 01:10:44 crc kubenswrapper[4805]: I0217 01:10:44.945583 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-utilities\") pod \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\" (UID: \"1cc7d28c-f2f7-4933-adee-972e04d4b3f5\") " Feb 17 01:10:44 crc kubenswrapper[4805]: I0217 01:10:44.946412 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-utilities" (OuterVolumeSpecName: "utilities") pod "1cc7d28c-f2f7-4933-adee-972e04d4b3f5" (UID: "1cc7d28c-f2f7-4933-adee-972e04d4b3f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:10:44 crc kubenswrapper[4805]: I0217 01:10:44.951413 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-kube-api-access-cbxv4" (OuterVolumeSpecName: "kube-api-access-cbxv4") pod "1cc7d28c-f2f7-4933-adee-972e04d4b3f5" (UID: "1cc7d28c-f2f7-4933-adee-972e04d4b3f5"). InnerVolumeSpecName "kube-api-access-cbxv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:10:44 crc kubenswrapper[4805]: I0217 01:10:44.970173 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cc7d28c-f2f7-4933-adee-972e04d4b3f5" (UID: "1cc7d28c-f2f7-4933-adee-972e04d4b3f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.048704 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.048751 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbxv4\" (UniqueName: \"kubernetes.io/projected/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-kube-api-access-cbxv4\") on node \"crc\" DevicePath \"\"" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.048771 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cc7d28c-f2f7-4933-adee-972e04d4b3f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.334705 4805 generic.go:334] "Generic (PLEG): container finished" podID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" containerID="7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa" exitCode=0 Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.334766 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qhj2" event={"ID":"1cc7d28c-f2f7-4933-adee-972e04d4b3f5","Type":"ContainerDied","Data":"7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa"} Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.334813 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qhj2" event={"ID":"1cc7d28c-f2f7-4933-adee-972e04d4b3f5","Type":"ContainerDied","Data":"a3780e6ef1339a566b25eab8b7998aa197bddbfb5f1c720feb276aee2ea7f203"} Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.334842 4805 scope.go:117] "RemoveContainer" containerID="7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.335569 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qhj2" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.375838 4805 scope.go:117] "RemoveContainer" containerID="ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.396940 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qhj2"] Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.409046 4805 scope.go:117] "RemoveContainer" containerID="79791ff5adcab282cc9d7dd4078ba216a61eb70442676aa44bc95ce666d410bf" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.410879 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qhj2"] Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.495911 4805 scope.go:117] "RemoveContainer" containerID="7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa" Feb 17 01:10:45 crc kubenswrapper[4805]: E0217 01:10:45.496806 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa\": container with ID starting with 7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa not found: ID does not exist" containerID="7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.496900 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa"} err="failed to get container status \"7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa\": rpc error: code = NotFound desc = could not find container \"7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa\": container with ID starting with 7c33cde3efc6b9b92bdb2a8718ee34673e97380f642e7577a901d2ffec4a18aa not found: ID does not exist" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.496927 4805 scope.go:117] "RemoveContainer" containerID="ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10" Feb 17 01:10:45 crc kubenswrapper[4805]: E0217 01:10:45.497273 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10\": container with ID starting with ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10 not found: ID does not exist" containerID="ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.497311 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10"} err="failed to get container status \"ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10\": rpc error: code = NotFound desc = could not find container \"ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10\": container with ID starting with ecd9a31102e49e53909efb232e719edf0d040ae9d2796ecf63e27bef4c208d10 not found: ID does not exist" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.497344 4805 scope.go:117] "RemoveContainer" containerID="79791ff5adcab282cc9d7dd4078ba216a61eb70442676aa44bc95ce666d410bf" Feb 17 01:10:45 crc kubenswrapper[4805]: E0217 01:10:45.497742 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"79791ff5adcab282cc9d7dd4078ba216a61eb70442676aa44bc95ce666d410bf\": container with ID starting with 79791ff5adcab282cc9d7dd4078ba216a61eb70442676aa44bc95ce666d410bf not found: ID does not exist" containerID="79791ff5adcab282cc9d7dd4078ba216a61eb70442676aa44bc95ce666d410bf" Feb 17 01:10:45 crc kubenswrapper[4805]: I0217 01:10:45.497762 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79791ff5adcab282cc9d7dd4078ba216a61eb70442676aa44bc95ce666d410bf"} err="failed to get container status \"79791ff5adcab282cc9d7dd4078ba216a61eb70442676aa44bc95ce666d410bf\": rpc error: code = NotFound desc = could not find container \"79791ff5adcab282cc9d7dd4078ba216a61eb70442676aa44bc95ce666d410bf\": container with ID starting with 79791ff5adcab282cc9d7dd4078ba216a61eb70442676aa44bc95ce666d410bf not found: ID does not exist" Feb 17 01:10:46 crc kubenswrapper[4805]: I0217 01:10:46.817437 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" path="/var/lib/kubelet/pods/1cc7d28c-f2f7-4933-adee-972e04d4b3f5/volumes" Feb 17 01:10:52 crc kubenswrapper[4805]: E0217 01:10:52.787084 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:10:53 crc kubenswrapper[4805]: I0217 01:10:53.077307 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:10:53 crc kubenswrapper[4805]: I0217 01:10:53.077422 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:10:53 crc kubenswrapper[4805]: I0217 01:10:53.077484 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 01:10:53 crc kubenswrapper[4805]: I0217 01:10:53.079314 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 01:10:53 crc kubenswrapper[4805]: I0217 01:10:53.079449 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" gracePeriod=600 Feb 17 01:10:53 crc kubenswrapper[4805]: E0217 01:10:53.206785 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:10:53 crc kubenswrapper[4805]: I0217 01:10:53.461475 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" exitCode=0 Feb 17 01:10:53 crc kubenswrapper[4805]: I0217 01:10:53.461538 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba"} Feb 17 01:10:53 crc kubenswrapper[4805]: I0217 01:10:53.461584 4805 scope.go:117] "RemoveContainer" containerID="eb1f775cd3f02bf701232127480fc07531993028f084a510226723c0e5ae9ba3" Feb 17 01:10:53 crc kubenswrapper[4805]: I0217 01:10:53.463128 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:10:53 crc kubenswrapper[4805]: E0217 01:10:53.464189 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:10:55 crc kubenswrapper[4805]: E0217 01:10:55.788937 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.186679 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bx6ct"] Feb 17 01:11:04 crc kubenswrapper[4805]: E0217 01:11:04.188261 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" containerName="extract-content" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.188295 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" containerName="extract-content" Feb 17 01:11:04 crc kubenswrapper[4805]: E0217 01:11:04.188380 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" containerName="extract-utilities" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.188402 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" containerName="extract-utilities" Feb 17 01:11:04 crc kubenswrapper[4805]: E0217 01:11:04.188480 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a01c5de-51fd-43d7-bfba-503b8ed21888" containerName="registry-server" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.188501 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a01c5de-51fd-43d7-bfba-503b8ed21888" containerName="registry-server" Feb 17 01:11:04 crc 
kubenswrapper[4805]: E0217 01:11:04.188538 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a01c5de-51fd-43d7-bfba-503b8ed21888" containerName="extract-utilities" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.188553 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a01c5de-51fd-43d7-bfba-503b8ed21888" containerName="extract-utilities" Feb 17 01:11:04 crc kubenswrapper[4805]: E0217 01:11:04.188591 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a01c5de-51fd-43d7-bfba-503b8ed21888" containerName="extract-content" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.188608 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a01c5de-51fd-43d7-bfba-503b8ed21888" containerName="extract-content" Feb 17 01:11:04 crc kubenswrapper[4805]: E0217 01:11:04.188629 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" containerName="registry-server" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.188646 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" containerName="registry-server" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.189182 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a01c5de-51fd-43d7-bfba-503b8ed21888" containerName="registry-server" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.189240 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cc7d28c-f2f7-4933-adee-972e04d4b3f5" containerName="registry-server" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.192836 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.213129 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bx6ct"] Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.256616 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-utilities\") pod \"community-operators-bx6ct\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.257050 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22hwx\" (UniqueName: \"kubernetes.io/projected/731e64dc-d554-4303-bc5f-a9965bb9141e-kube-api-access-22hwx\") pod \"community-operators-bx6ct\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.257164 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-catalog-content\") pod \"community-operators-bx6ct\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.360432 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-utilities\") pod \"community-operators-bx6ct\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " 
pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.360688 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22hwx\" (UniqueName: \"kubernetes.io/projected/731e64dc-d554-4303-bc5f-a9965bb9141e-kube-api-access-22hwx\") pod \"community-operators-bx6ct\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.360761 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-catalog-content\") pod \"community-operators-bx6ct\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.361161 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-utilities\") pod \"community-operators-bx6ct\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.361754 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-catalog-content\") pod \"community-operators-bx6ct\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.393532 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22hwx\" (UniqueName: \"kubernetes.io/projected/731e64dc-d554-4303-bc5f-a9965bb9141e-kube-api-access-22hwx\") pod \"community-operators-bx6ct\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:04 crc kubenswrapper[4805]: I0217 01:11:04.539167 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:05 crc kubenswrapper[4805]: I0217 01:11:05.698259 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bx6ct"] Feb 17 01:11:05 crc kubenswrapper[4805]: I0217 01:11:05.786043 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:11:05 crc kubenswrapper[4805]: E0217 01:11:05.786836 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:11:06 crc kubenswrapper[4805]: I0217 01:11:06.662846 4805 generic.go:334] "Generic (PLEG): container finished" podID="731e64dc-d554-4303-bc5f-a9965bb9141e" containerID="7d9361fb41c6fa23551f8e5701d148fce63905d2d91a3834f91bec8a71e54d46" exitCode=0 Feb 17 01:11:06 crc kubenswrapper[4805]: I0217 01:11:06.662949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx6ct" event={"ID":"731e64dc-d554-4303-bc5f-a9965bb9141e","Type":"ContainerDied","Data":"7d9361fb41c6fa23551f8e5701d148fce63905d2d91a3834f91bec8a71e54d46"} Feb 17 01:11:06 crc kubenswrapper[4805]: I0217 01:11:06.663285 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx6ct" event={"ID":"731e64dc-d554-4303-bc5f-a9965bb9141e","Type":"ContainerStarted","Data":"450df4b2cc5df28800e339944c033a495b7e62819585020d484abcdde35b6760"} Feb 17 01:11:07 crc kubenswrapper[4805]: I0217 01:11:07.680477 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx6ct" event={"ID":"731e64dc-d554-4303-bc5f-a9965bb9141e","Type":"ContainerStarted","Data":"67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf"} Feb 17 01:11:07 crc kubenswrapper[4805]: E0217 01:11:07.787303 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:11:07 crc kubenswrapper[4805]: E0217 01:11:07.788119 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:11:09 crc kubenswrapper[4805]: I0217 01:11:09.710218 4805 generic.go:334] "Generic (PLEG): container finished" podID="731e64dc-d554-4303-bc5f-a9965bb9141e" containerID="67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf" exitCode=0 Feb 17 01:11:09 crc kubenswrapper[4805]: I0217 01:11:09.710498 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx6ct" event={"ID":"731e64dc-d554-4303-bc5f-a9965bb9141e","Type":"ContainerDied","Data":"67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf"} Feb 17 01:11:10 crc 
kubenswrapper[4805]: I0217 01:11:10.723614 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx6ct" event={"ID":"731e64dc-d554-4303-bc5f-a9965bb9141e","Type":"ContainerStarted","Data":"4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2"} Feb 17 01:11:10 crc kubenswrapper[4805]: I0217 01:11:10.748997 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bx6ct" podStartSLOduration=3.272304543 podStartE2EDuration="6.748975583s" podCreationTimestamp="2026-02-17 01:11:04 +0000 UTC" firstStartedPulling="2026-02-17 01:11:06.666034449 +0000 UTC m=+2892.681843887" lastFinishedPulling="2026-02-17 01:11:10.142705499 +0000 UTC m=+2896.158514927" observedRunningTime="2026-02-17 01:11:10.741200816 +0000 UTC m=+2896.757010254" watchObservedRunningTime="2026-02-17 01:11:10.748975583 +0000 UTC m=+2896.764784981" Feb 17 01:11:14 crc kubenswrapper[4805]: I0217 01:11:14.539773 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:14 crc kubenswrapper[4805]: I0217 01:11:14.540690 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:14 crc kubenswrapper[4805]: I0217 01:11:14.659868 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:19 crc kubenswrapper[4805]: I0217 01:11:19.785715 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:11:19 crc kubenswrapper[4805]: E0217 01:11:19.786551 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.060395 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7"] Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.062818 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.067715 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.068466 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.068501 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.068583 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.068627 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.092677 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7"] Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.250435 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.250738 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.250812 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.250983 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.251098 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2prg\" (UniqueName: \"kubernetes.io/projected/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-kube-api-access-d2prg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc 
kubenswrapper[4805]: I0217 01:11:20.251435 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.251650 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.353495 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.354053 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.354234 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.354913 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.355037 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.355157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2prg\" (UniqueName: \"kubernetes.io/projected/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-kube-api-access-d2prg\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.355292 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.361428 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.361650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.362491 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.363192 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.364057 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.369260 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.379528 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2prg\" (UniqueName: 
\"kubernetes.io/projected/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-kube-api-access-d2prg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:20 crc kubenswrapper[4805]: I0217 01:11:20.395738 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:11:21 crc kubenswrapper[4805]: I0217 01:11:21.040848 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7"] Feb 17 01:11:21 crc kubenswrapper[4805]: W0217 01:11:21.050792 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc866feaf_36a5_4fe7_b8e7_1ba3de81424f.slice/crio-75bb767231df0fbf23cad3649eb8dc2e53b66bbdeac5a91088207242dc1364e8 WatchSource:0}: Error finding container 75bb767231df0fbf23cad3649eb8dc2e53b66bbdeac5a91088207242dc1364e8: Status 404 returned error can't find the container with id 75bb767231df0fbf23cad3649eb8dc2e53b66bbdeac5a91088207242dc1364e8 Feb 17 01:11:21 crc kubenswrapper[4805]: E0217 01:11:21.788823 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:11:21 crc kubenswrapper[4805]: I0217 01:11:21.877204 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" event={"ID":"c866feaf-36a5-4fe7-b8e7-1ba3de81424f","Type":"ContainerStarted","Data":"edbf9af57c92da3480469057caa59f3fe42b82f5d4b797034e2dff5fc5b61d62"} Feb 17 01:11:21 crc kubenswrapper[4805]: I0217 01:11:21.877272 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" event={"ID":"c866feaf-36a5-4fe7-b8e7-1ba3de81424f","Type":"ContainerStarted","Data":"75bb767231df0fbf23cad3649eb8dc2e53b66bbdeac5a91088207242dc1364e8"} Feb 17 01:11:21 crc kubenswrapper[4805]: I0217 01:11:21.907953 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" podStartSLOduration=1.462971132 podStartE2EDuration="1.907932027s" podCreationTimestamp="2026-02-17 01:11:20 +0000 UTC" firstStartedPulling="2026-02-17 01:11:21.054399166 +0000 UTC m=+2907.070208574" lastFinishedPulling="2026-02-17 01:11:21.499360031 +0000 UTC m=+2907.515169469" observedRunningTime="2026-02-17 01:11:21.899894172 +0000 UTC m=+2907.915703570" watchObservedRunningTime="2026-02-17 01:11:21.907932027 +0000 UTC m=+2907.923741425" Feb 17 01:11:22 crc kubenswrapper[4805]: E0217 01:11:22.787685 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:11:24 crc kubenswrapper[4805]: I0217 01:11:24.632724 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:24 crc 
kubenswrapper[4805]: I0217 01:11:24.719724 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bx6ct"] Feb 17 01:11:24 crc kubenswrapper[4805]: I0217 01:11:24.911264 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bx6ct" podUID="731e64dc-d554-4303-bc5f-a9965bb9141e" containerName="registry-server" containerID="cri-o://4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2" gracePeriod=2 Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.533399 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.689557 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22hwx\" (UniqueName: \"kubernetes.io/projected/731e64dc-d554-4303-bc5f-a9965bb9141e-kube-api-access-22hwx\") pod \"731e64dc-d554-4303-bc5f-a9965bb9141e\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.689705 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-catalog-content\") pod \"731e64dc-d554-4303-bc5f-a9965bb9141e\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.689762 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-utilities\") pod \"731e64dc-d554-4303-bc5f-a9965bb9141e\" (UID: \"731e64dc-d554-4303-bc5f-a9965bb9141e\") " Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.691286 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-utilities" (OuterVolumeSpecName: "utilities") pod "731e64dc-d554-4303-bc5f-a9965bb9141e" (UID: "731e64dc-d554-4303-bc5f-a9965bb9141e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.695610 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/731e64dc-d554-4303-bc5f-a9965bb9141e-kube-api-access-22hwx" (OuterVolumeSpecName: "kube-api-access-22hwx") pod "731e64dc-d554-4303-bc5f-a9965bb9141e" (UID: "731e64dc-d554-4303-bc5f-a9965bb9141e"). InnerVolumeSpecName "kube-api-access-22hwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.735006 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "731e64dc-d554-4303-bc5f-a9965bb9141e" (UID: "731e64dc-d554-4303-bc5f-a9965bb9141e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.793187 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.793827 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731e64dc-d554-4303-bc5f-a9965bb9141e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.793938 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22hwx\" (UniqueName: \"kubernetes.io/projected/731e64dc-d554-4303-bc5f-a9965bb9141e-kube-api-access-22hwx\") on node \"crc\" DevicePath \"\"" Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.928855 4805 generic.go:334] "Generic (PLEG): container finished" podID="731e64dc-d554-4303-bc5f-a9965bb9141e" containerID="4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2" exitCode=0 Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.928919 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx6ct" event={"ID":"731e64dc-d554-4303-bc5f-a9965bb9141e","Type":"ContainerDied","Data":"4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2"} Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.928969 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bx6ct" event={"ID":"731e64dc-d554-4303-bc5f-a9965bb9141e","Type":"ContainerDied","Data":"450df4b2cc5df28800e339944c033a495b7e62819585020d484abcdde35b6760"} Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.929000 4805 scope.go:117] "RemoveContainer" containerID="4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2" Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.929445 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bx6ct" Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.954276 4805 scope.go:117] "RemoveContainer" containerID="67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf" Feb 17 01:11:25 crc kubenswrapper[4805]: I0217 01:11:25.984706 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bx6ct"] Feb 17 01:11:26 crc kubenswrapper[4805]: I0217 01:11:26.004682 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bx6ct"] Feb 17 01:11:26 crc kubenswrapper[4805]: I0217 01:11:26.005382 4805 scope.go:117] "RemoveContainer" containerID="7d9361fb41c6fa23551f8e5701d148fce63905d2d91a3834f91bec8a71e54d46" Feb 17 01:11:26 crc kubenswrapper[4805]: I0217 01:11:26.046703 4805 scope.go:117] "RemoveContainer" containerID="4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2" Feb 17 01:11:26 crc kubenswrapper[4805]: E0217 01:11:26.047463 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2\": container with ID starting with 4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2 not found: ID does not exist" containerID="4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2" Feb 17 01:11:26 crc kubenswrapper[4805]: I0217 01:11:26.047529 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2"} err="failed to get container status \"4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2\": rpc error: code = NotFound desc = could not find container \"4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2\": container with ID starting with 4231d1ff6f4294009ac65efa34527dd1ab01ffdcc4d76fdde58c596b0af0e0c2 not found: ID does not exist" Feb 17 01:11:26 crc kubenswrapper[4805]: I0217 01:11:26.047569 4805 scope.go:117] "RemoveContainer" containerID="67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf" Feb 17 01:11:26 crc kubenswrapper[4805]: E0217 01:11:26.048073 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf\": container with ID starting with 67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf not found: ID does not exist" containerID="67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf" Feb 17 01:11:26 crc kubenswrapper[4805]: I0217 01:11:26.048143 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf"} err="failed to get container status \"67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf\": rpc error: code = NotFound desc = could not find container \"67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf\": container with ID starting with 67baa4f526a856d6855b5be0d9c0a8d36edc498a13cced2bf744a5aeff48b0cf not found: ID does not exist" Feb 17 01:11:26 crc kubenswrapper[4805]: I0217 01:11:26.048183 4805 scope.go:117] "RemoveContainer" containerID="7d9361fb41c6fa23551f8e5701d148fce63905d2d91a3834f91bec8a71e54d46" Feb 17 01:11:26 crc kubenswrapper[4805]: E0217 01:11:26.048574 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7d9361fb41c6fa23551f8e5701d148fce63905d2d91a3834f91bec8a71e54d46\": container with ID starting with 7d9361fb41c6fa23551f8e5701d148fce63905d2d91a3834f91bec8a71e54d46 not found: ID does not exist" containerID="7d9361fb41c6fa23551f8e5701d148fce63905d2d91a3834f91bec8a71e54d46" Feb 17 01:11:26 crc kubenswrapper[4805]: I0217 01:11:26.048618 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d9361fb41c6fa23551f8e5701d148fce63905d2d91a3834f91bec8a71e54d46"} err="failed to get container status \"7d9361fb41c6fa23551f8e5701d148fce63905d2d91a3834f91bec8a71e54d46\": rpc error: code = NotFound desc = could not find container \"7d9361fb41c6fa23551f8e5701d148fce63905d2d91a3834f91bec8a71e54d46\": container with ID starting with 7d9361fb41c6fa23551f8e5701d148fce63905d2d91a3834f91bec8a71e54d46 not found: ID does not exist" Feb 17 01:11:26 crc kubenswrapper[4805]: I0217 01:11:26.803808 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="731e64dc-d554-4303-bc5f-a9965bb9141e" path="/var/lib/kubelet/pods/731e64dc-d554-4303-bc5f-a9965bb9141e/volumes" Feb 17 01:11:30 crc kubenswrapper[4805]: I0217 01:11:30.784890 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:11:30 crc kubenswrapper[4805]: E0217 01:11:30.785826 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:11:33 crc kubenswrapper[4805]: E0217 01:11:33.788887 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:11:33 crc kubenswrapper[4805]: E0217 01:11:33.788946 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.538675 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hdw9l"] Feb 17 01:11:40 crc kubenswrapper[4805]: E0217 01:11:40.540027 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731e64dc-d554-4303-bc5f-a9965bb9141e" containerName="registry-server" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.540051 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="731e64dc-d554-4303-bc5f-a9965bb9141e" containerName="registry-server" Feb 17 01:11:40 crc kubenswrapper[4805]: E0217 01:11:40.540087 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731e64dc-d554-4303-bc5f-a9965bb9141e" containerName="extract-content" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.540100 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="731e64dc-d554-4303-bc5f-a9965bb9141e" containerName="extract-content" Feb 17 01:11:40 crc kubenswrapper[4805]: E0217 01:11:40.540123 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731e64dc-d554-4303-bc5f-a9965bb9141e" containerName="extract-utilities" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.540136 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="731e64dc-d554-4303-bc5f-a9965bb9141e" containerName="extract-utilities" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.540509 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="731e64dc-d554-4303-bc5f-a9965bb9141e" containerName="registry-server" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.543363 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.569700 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdw9l"] Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.678991 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gldvl\" (UniqueName: \"kubernetes.io/projected/19335622-9351-4d0e-abde-cadb2a44b19d-kube-api-access-gldvl\") pod \"redhat-operators-hdw9l\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.679430 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-catalog-content\") pod \"redhat-operators-hdw9l\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.679489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-utilities\") pod \"redhat-operators-hdw9l\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.781052 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-catalog-content\") pod \"redhat-operators-hdw9l\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.781160 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-utilities\") pod \"redhat-operators-hdw9l\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.781229 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gldvl\" (UniqueName: \"kubernetes.io/projected/19335622-9351-4d0e-abde-cadb2a44b19d-kube-api-access-gldvl\") pod \"redhat-operators-hdw9l\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.782042 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-catalog-content\") pod \"redhat-operators-hdw9l\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.782344 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-utilities\") pod \"redhat-operators-hdw9l\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.809620 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gldvl\" (UniqueName: \"kubernetes.io/projected/19335622-9351-4d0e-abde-cadb2a44b19d-kube-api-access-gldvl\") pod \"redhat-operators-hdw9l\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:40 crc kubenswrapper[4805]: I0217 01:11:40.887319 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:41 crc kubenswrapper[4805]: I0217 01:11:41.362180 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdw9l"] Feb 17 01:11:41 crc kubenswrapper[4805]: W0217 01:11:41.374500 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19335622_9351_4d0e_abde_cadb2a44b19d.slice/crio-944caed72601d76f2542d92a32724e35a1894579019be473e142b8726e9b95b7 WatchSource:0}: Error finding container 944caed72601d76f2542d92a32724e35a1894579019be473e142b8726e9b95b7: Status 404 returned error can't find the container with id 944caed72601d76f2542d92a32724e35a1894579019be473e142b8726e9b95b7 Feb 17 01:11:41 crc kubenswrapper[4805]: I0217 01:11:41.785413 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:11:41 crc kubenswrapper[4805]: E0217 01:11:41.785973 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:11:42 crc kubenswrapper[4805]: I0217 01:11:42.128784 4805 generic.go:334] "Generic (PLEG): container finished" podID="19335622-9351-4d0e-abde-cadb2a44b19d" containerID="0353fa2bc0f3884cf627ef183ba2ae2c485c2b8e375dea190f980a06d8f0b9e1" exitCode=0 Feb 17 01:11:42 crc kubenswrapper[4805]: I0217 01:11:42.128821 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdw9l" event={"ID":"19335622-9351-4d0e-abde-cadb2a44b19d","Type":"ContainerDied","Data":"0353fa2bc0f3884cf627ef183ba2ae2c485c2b8e375dea190f980a06d8f0b9e1"} Feb 17 01:11:42 crc kubenswrapper[4805]: I0217 01:11:42.128847 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdw9l" event={"ID":"19335622-9351-4d0e-abde-cadb2a44b19d","Type":"ContainerStarted","Data":"944caed72601d76f2542d92a32724e35a1894579019be473e142b8726e9b95b7"} Feb 17 01:11:43 crc 
kubenswrapper[4805]: I0217 01:11:43.141962 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdw9l" event={"ID":"19335622-9351-4d0e-abde-cadb2a44b19d","Type":"ContainerStarted","Data":"a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22"} Feb 17 01:11:46 crc kubenswrapper[4805]: I0217 01:11:46.193160 4805 generic.go:334] "Generic (PLEG): container finished" podID="19335622-9351-4d0e-abde-cadb2a44b19d" containerID="a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22" exitCode=0 Feb 17 01:11:46 crc kubenswrapper[4805]: I0217 01:11:46.193259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdw9l" event={"ID":"19335622-9351-4d0e-abde-cadb2a44b19d","Type":"ContainerDied","Data":"a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22"} Feb 17 01:11:46 crc kubenswrapper[4805]: E0217 01:11:46.787241 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:11:48 crc kubenswrapper[4805]: I0217 01:11:48.255476 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdw9l" event={"ID":"19335622-9351-4d0e-abde-cadb2a44b19d","Type":"ContainerStarted","Data":"cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6"} Feb 17 01:11:48 crc kubenswrapper[4805]: I0217 01:11:48.277294 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hdw9l" podStartSLOduration=3.468114999 podStartE2EDuration="8.277259779s" podCreationTimestamp="2026-02-17 01:11:40 +0000 UTC" firstStartedPulling="2026-02-17 01:11:42.131521333 +0000 UTC m=+2928.147330741" lastFinishedPulling="2026-02-17 01:11:46.940666103 +0000 UTC m=+2932.956475521" observedRunningTime="2026-02-17 01:11:48.269927783 +0000 UTC m=+2934.285737221" watchObservedRunningTime="2026-02-17 01:11:48.277259779 +0000 UTC m=+2934.293069187" Feb 17 01:11:48 crc kubenswrapper[4805]: E0217 01:11:48.787483 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:11:50 crc kubenswrapper[4805]: I0217 01:11:50.888269 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:50 crc kubenswrapper[4805]: I0217 01:11:50.888703 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:11:51 crc kubenswrapper[4805]: I0217 01:11:51.964416 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hdw9l" podUID="19335622-9351-4d0e-abde-cadb2a44b19d" containerName="registry-server" probeResult="failure" output=< Feb 17 01:11:51 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 01:11:51 crc kubenswrapper[4805]: > Feb 17 01:11:56 crc kubenswrapper[4805]: I0217 01:11:56.785285 4805 scope.go:117] "RemoveContainer" 
containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:11:56 crc kubenswrapper[4805]: E0217 01:11:56.786684 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:11:57 crc kubenswrapper[4805]: E0217 01:11:57.788579 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:12:00 crc kubenswrapper[4805]: I0217 01:12:00.968855 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:12:01 crc kubenswrapper[4805]: I0217 01:12:01.029768 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:12:01 crc kubenswrapper[4805]: I0217 01:12:01.208971 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdw9l"] Feb 17 01:12:02 crc kubenswrapper[4805]: I0217 01:12:02.430495 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hdw9l" podUID="19335622-9351-4d0e-abde-cadb2a44b19d" containerName="registry-server" containerID="cri-o://cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6" gracePeriod=2 Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.016450 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.063234 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-utilities\") pod \"19335622-9351-4d0e-abde-cadb2a44b19d\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.063397 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-catalog-content\") pod \"19335622-9351-4d0e-abde-cadb2a44b19d\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.063429 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gldvl\" (UniqueName: \"kubernetes.io/projected/19335622-9351-4d0e-abde-cadb2a44b19d-kube-api-access-gldvl\") pod \"19335622-9351-4d0e-abde-cadb2a44b19d\" (UID: \"19335622-9351-4d0e-abde-cadb2a44b19d\") " Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.064241 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-utilities" (OuterVolumeSpecName: "utilities") pod "19335622-9351-4d0e-abde-cadb2a44b19d" (UID: "19335622-9351-4d0e-abde-cadb2a44b19d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.073568 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19335622-9351-4d0e-abde-cadb2a44b19d-kube-api-access-gldvl" (OuterVolumeSpecName: "kube-api-access-gldvl") pod "19335622-9351-4d0e-abde-cadb2a44b19d" (UID: "19335622-9351-4d0e-abde-cadb2a44b19d"). InnerVolumeSpecName "kube-api-access-gldvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.165918 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gldvl\" (UniqueName: \"kubernetes.io/projected/19335622-9351-4d0e-abde-cadb2a44b19d-kube-api-access-gldvl\") on node \"crc\" DevicePath \"\"" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.165949 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.202356 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19335622-9351-4d0e-abde-cadb2a44b19d" (UID: "19335622-9351-4d0e-abde-cadb2a44b19d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.267153 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19335622-9351-4d0e-abde-cadb2a44b19d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.446513 4805 generic.go:334] "Generic (PLEG): container finished" podID="19335622-9351-4d0e-abde-cadb2a44b19d" containerID="cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6" exitCode=0 Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.446641 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdw9l" event={"ID":"19335622-9351-4d0e-abde-cadb2a44b19d","Type":"ContainerDied","Data":"cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6"} Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.446693 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hdw9l" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.446940 4805 scope.go:117] "RemoveContainer" containerID="cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.446922 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdw9l" event={"ID":"19335622-9351-4d0e-abde-cadb2a44b19d","Type":"ContainerDied","Data":"944caed72601d76f2542d92a32724e35a1894579019be473e142b8726e9b95b7"} Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.478933 4805 scope.go:117] "RemoveContainer" containerID="a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.505454 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdw9l"] Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.512995 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hdw9l"] Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.526010 4805 scope.go:117] "RemoveContainer" containerID="0353fa2bc0f3884cf627ef183ba2ae2c485c2b8e375dea190f980a06d8f0b9e1" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.575857 4805 scope.go:117] "RemoveContainer" containerID="cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6" Feb 17 01:12:03 crc kubenswrapper[4805]: E0217 01:12:03.576356 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6\": container with ID starting with cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6 not found: ID does not exist" containerID="cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.576389 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6"} err="failed to get container status \"cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6\": rpc error: code = NotFound desc = could not find container \"cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6\": container with ID starting with cb94bd8ec8a3e68f67af352518947a76dd6fae08a1d8c337d4cb93be84c736c6 not found: ID does not exist" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.576410 4805 scope.go:117] "RemoveContainer" containerID="a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22" Feb 17 01:12:03 crc kubenswrapper[4805]: E0217 01:12:03.576660 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22\": container with ID starting with a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22 not found: ID does not exist" containerID="a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.576680 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22"} err="failed to get container status \"a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22\": rpc error: code = NotFound desc = could not find container 
\"a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22\": container with ID starting with a7699441e33572f9283716c1874e87b02dc3d48ede8216ccc7011bcd60113a22 not found: ID does not exist" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.576694 4805 scope.go:117] "RemoveContainer" containerID="0353fa2bc0f3884cf627ef183ba2ae2c485c2b8e375dea190f980a06d8f0b9e1" Feb 17 01:12:03 crc kubenswrapper[4805]: E0217 01:12:03.576951 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0353fa2bc0f3884cf627ef183ba2ae2c485c2b8e375dea190f980a06d8f0b9e1\": container with ID starting with 0353fa2bc0f3884cf627ef183ba2ae2c485c2b8e375dea190f980a06d8f0b9e1 not found: ID does not exist" containerID="0353fa2bc0f3884cf627ef183ba2ae2c485c2b8e375dea190f980a06d8f0b9e1" Feb 17 01:12:03 crc kubenswrapper[4805]: I0217 01:12:03.576983 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0353fa2bc0f3884cf627ef183ba2ae2c485c2b8e375dea190f980a06d8f0b9e1"} err="failed to get container status \"0353fa2bc0f3884cf627ef183ba2ae2c485c2b8e375dea190f980a06d8f0b9e1\": rpc error: code = NotFound desc = could not find container \"0353fa2bc0f3884cf627ef183ba2ae2c485c2b8e375dea190f980a06d8f0b9e1\": container with ID starting with 0353fa2bc0f3884cf627ef183ba2ae2c485c2b8e375dea190f980a06d8f0b9e1 not found: ID does not exist" Feb 17 01:12:03 crc kubenswrapper[4805]: E0217 01:12:03.786450 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:12:04 crc kubenswrapper[4805]: I0217 01:12:04.804350 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19335622-9351-4d0e-abde-cadb2a44b19d" path="/var/lib/kubelet/pods/19335622-9351-4d0e-abde-cadb2a44b19d/volumes" Feb 17 01:12:08 crc kubenswrapper[4805]: I0217 01:12:08.786127 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:12:08 crc kubenswrapper[4805]: E0217 01:12:08.789379 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:12:12 crc kubenswrapper[4805]: E0217 01:12:12.787571 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:12:14 crc kubenswrapper[4805]: I0217 01:12:14.577731 4805 generic.go:334] "Generic (PLEG): container finished" podID="c866feaf-36a5-4fe7-b8e7-1ba3de81424f" containerID="edbf9af57c92da3480469057caa59f3fe42b82f5d4b797034e2dff5fc5b61d62" exitCode=2 Feb 17 01:12:14 crc kubenswrapper[4805]: I0217 01:12:14.577844 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" event={"ID":"c866feaf-36a5-4fe7-b8e7-1ba3de81424f","Type":"ContainerDied","Data":"edbf9af57c92da3480469057caa59f3fe42b82f5d4b797034e2dff5fc5b61d62"} Feb 17 01:12:14 crc kubenswrapper[4805]: E0217 01:12:14.802167 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.050425 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.061152 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-1\") pod \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.061234 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-inventory\") pod \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.061260 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-0\") pod \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.061432 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-telemetry-combined-ca-bundle\") pod \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.061492 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ssh-key-openstack-edpm-ipam\") pod \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.061567 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-2\") pod \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.061597 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2prg\" (UniqueName: \"kubernetes.io/projected/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-kube-api-access-d2prg\") pod \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\" (UID: \"c866feaf-36a5-4fe7-b8e7-1ba3de81424f\") " Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.067941 4805 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c866feaf-36a5-4fe7-b8e7-1ba3de81424f" (UID: "c866feaf-36a5-4fe7-b8e7-1ba3de81424f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.071780 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-kube-api-access-d2prg" (OuterVolumeSpecName: "kube-api-access-d2prg") pod "c866feaf-36a5-4fe7-b8e7-1ba3de81424f" (UID: "c866feaf-36a5-4fe7-b8e7-1ba3de81424f"). InnerVolumeSpecName "kube-api-access-d2prg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.105799 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "c866feaf-36a5-4fe7-b8e7-1ba3de81424f" (UID: "c866feaf-36a5-4fe7-b8e7-1ba3de81424f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.106220 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "c866feaf-36a5-4fe7-b8e7-1ba3de81424f" (UID: "c866feaf-36a5-4fe7-b8e7-1ba3de81424f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.115111 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "c866feaf-36a5-4fe7-b8e7-1ba3de81424f" (UID: "c866feaf-36a5-4fe7-b8e7-1ba3de81424f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.118323 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-inventory" (OuterVolumeSpecName: "inventory") pod "c866feaf-36a5-4fe7-b8e7-1ba3de81424f" (UID: "c866feaf-36a5-4fe7-b8e7-1ba3de81424f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.134781 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c866feaf-36a5-4fe7-b8e7-1ba3de81424f" (UID: "c866feaf-36a5-4fe7-b8e7-1ba3de81424f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.164886 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.165003 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2prg\" (UniqueName: \"kubernetes.io/projected/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-kube-api-access-d2prg\") on node \"crc\" DevicePath \"\"" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.165060 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.165111 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.165177 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.165229 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.165296 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c866feaf-36a5-4fe7-b8e7-1ba3de81424f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.604801 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" event={"ID":"c866feaf-36a5-4fe7-b8e7-1ba3de81424f","Type":"ContainerDied","Data":"75bb767231df0fbf23cad3649eb8dc2e53b66bbdeac5a91088207242dc1364e8"} Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.604842 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75bb767231df0fbf23cad3649eb8dc2e53b66bbdeac5a91088207242dc1364e8" Feb 17 01:12:16 crc kubenswrapper[4805]: I0217 01:12:16.604919 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7" Feb 17 01:12:22 crc kubenswrapper[4805]: I0217 01:12:22.785762 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:12:22 crc kubenswrapper[4805]: E0217 01:12:22.788459 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:12:26 crc kubenswrapper[4805]: E0217 01:12:26.788630 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:12:27 crc kubenswrapper[4805]: E0217 01:12:27.787615 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:12:34 crc kubenswrapper[4805]: I0217 01:12:34.802194 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:12:34 crc kubenswrapper[4805]: E0217 01:12:34.803487 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:12:37 crc kubenswrapper[4805]: E0217 01:12:37.789008 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:12:38 crc kubenswrapper[4805]: E0217 01:12:38.788793 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:12:47 crc kubenswrapper[4805]: I0217 01:12:47.786423 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:12:47 crc kubenswrapper[4805]: E0217 01:12:47.788017 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:12:49 crc kubenswrapper[4805]: E0217 01:12:49.788821 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:12:51 crc kubenswrapper[4805]: E0217 01:12:51.786430 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:12:59 crc kubenswrapper[4805]: I0217 01:12:59.784912 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:12:59 crc kubenswrapper[4805]: E0217 01:12:59.785880 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:13:00 crc kubenswrapper[4805]: E0217 01:13:00.790248 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:13:02 crc kubenswrapper[4805]: E0217 01:13:02.788383 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:13:10 crc kubenswrapper[4805]: I0217 01:13:10.785195 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:13:10 crc kubenswrapper[4805]: E0217 01:13:10.786600 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:13:12 crc kubenswrapper[4805]: E0217 01:13:12.789854 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:13:17 crc kubenswrapper[4805]: E0217 01:13:17.787990 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:13:23 crc kubenswrapper[4805]: I0217 01:13:23.786551 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:13:23 crc kubenswrapper[4805]: E0217 01:13:23.787767 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:13:23 crc kubenswrapper[4805]: E0217 01:13:23.788439 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:13:32 crc kubenswrapper[4805]: E0217 01:13:32.788905 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:13:35 crc kubenswrapper[4805]: E0217 01:13:35.795463 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:13:37 crc kubenswrapper[4805]: I0217 01:13:37.785002 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:13:37 crc kubenswrapper[4805]: E0217 01:13:37.786063 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:13:43 crc kubenswrapper[4805]: I0217 01:13:43.786618 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:13:43 crc kubenswrapper[4805]: E0217 01:13:43.915664 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag 
current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:13:43 crc kubenswrapper[4805]: E0217 01:13:43.915738 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:13:43 crc kubenswrapper[4805]: E0217 01:13:43.915862 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
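The entries above pin down why heat-db-sync-tvlw9 keeps cycling through ImagePullBackOff: the registry answers that the `current-tested` tag of `quay.rdoproject.org/podified-master-centos10/openstack-heat-engine` was deleted or has expired, so retries cannot succeed until the tag is restored or the pod spec points at a live reference. One way to confirm the tag's state from outside the cluster is to list the repository's tags through the Docker Registry HTTP API v2 (`GET /v2/<name>/tags/list`). The Go sketch below is a diagnostic one-off under the assumption that the repository allows anonymous reads; it is not something the kubelet itself runs.

```go
// checktag.go - hypothetical diagnostic: does a tag still exist in a v2 registry?
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	const (
		registry = "quay.rdoproject.org"
		repo     = "podified-master-centos10/openstack-heat-engine"
		wantTag  = "current-tested"
	)

	// Docker Registry HTTP API v2 tag listing; assumes anonymous access is allowed.
	url := fmt.Sprintf("https://%s/v2/%s/tags/list", registry, repo)
	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		fmt.Fprintln(os.Stderr, "registry returned:", resp.Status)
		os.Exit(1)
	}

	var body struct {
		Name string   `json:"name"`
		Tags []string `json:"tags"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}
	for _, t := range body.Tags {
		if t == wantTag {
			fmt.Println("tag exists:", wantTag)
			return
		}
	}
	fmt.Printf("tag %q not among %d tags of %s, consistent with the kubelet's pull error\n",
		wantTag, len(body.Tags), repo)
}
```

The same check applies to the ceilometer-central image a few entries below, which fails with an identical "deleted or has expired" message for its own `current-tested` tag.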
To pull, revive via time machine" logger="UnhandledError" Feb 17 01:13:43 crc kubenswrapper[4805]: E0217 01:13:43.917030 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:13:46 crc kubenswrapper[4805]: E0217 01:13:46.886880 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:13:46 crc kubenswrapper[4805]: E0217 01:13:46.887361 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:13:46 crc kubenswrapper[4805]: E0217 01:13:46.887566 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:13:46 crc kubenswrapper[4805]: E0217 01:13:46.888762 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:13:50 crc kubenswrapper[4805]: I0217 01:13:50.785782 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:13:50 crc kubenswrapper[4805]: E0217 01:13:50.786946 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:13:57 crc kubenswrapper[4805]: E0217 01:13:57.787417 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:14:00 crc kubenswrapper[4805]: E0217 01:14:00.792267 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:14:02 crc kubenswrapper[4805]: I0217 01:14:02.787423 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:14:02 crc kubenswrapper[4805]: E0217 
01:14:02.788211 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:14:11 crc kubenswrapper[4805]: E0217 01:14:11.788699 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:14:12 crc kubenswrapper[4805]: E0217 01:14:12.787658 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:14:14 crc kubenswrapper[4805]: I0217 01:14:14.805109 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:14:14 crc kubenswrapper[4805]: E0217 01:14:14.805395 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:14:23 crc kubenswrapper[4805]: E0217 01:14:23.789236 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:14:25 crc kubenswrapper[4805]: I0217 01:14:25.785599 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:14:25 crc kubenswrapper[4805]: E0217 01:14:25.786143 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:14:25 crc kubenswrapper[4805]: E0217 01:14:25.788366 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:14:37 crc kubenswrapper[4805]: I0217 01:14:37.785715 4805 scope.go:117] "RemoveContainer" 
containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:14:37 crc kubenswrapper[4805]: E0217 01:14:37.787278 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:14:38 crc kubenswrapper[4805]: E0217 01:14:38.787199 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:14:39 crc kubenswrapper[4805]: E0217 01:14:39.787263 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:14:48 crc kubenswrapper[4805]: I0217 01:14:48.785232 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:14:48 crc kubenswrapper[4805]: E0217 01:14:48.788087 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:14:52 crc kubenswrapper[4805]: E0217 01:14:52.787751 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:14:52 crc kubenswrapper[4805]: E0217 01:14:52.789054 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.155752 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp"] Feb 17 01:15:00 crc kubenswrapper[4805]: E0217 01:15:00.156685 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19335622-9351-4d0e-abde-cadb2a44b19d" containerName="registry-server" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.156699 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="19335622-9351-4d0e-abde-cadb2a44b19d" containerName="registry-server" Feb 17 01:15:00 crc kubenswrapper[4805]: E0217 01:15:00.156714 4805 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="19335622-9351-4d0e-abde-cadb2a44b19d" containerName="extract-utilities" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.156720 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="19335622-9351-4d0e-abde-cadb2a44b19d" containerName="extract-utilities" Feb 17 01:15:00 crc kubenswrapper[4805]: E0217 01:15:00.156739 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c866feaf-36a5-4fe7-b8e7-1ba3de81424f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.156748 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c866feaf-36a5-4fe7-b8e7-1ba3de81424f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:15:00 crc kubenswrapper[4805]: E0217 01:15:00.156757 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19335622-9351-4d0e-abde-cadb2a44b19d" containerName="extract-content" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.156763 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="19335622-9351-4d0e-abde-cadb2a44b19d" containerName="extract-content" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.156974 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c866feaf-36a5-4fe7-b8e7-1ba3de81424f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.157007 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="19335622-9351-4d0e-abde-cadb2a44b19d" containerName="registry-server" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.157729 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.162745 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.163030 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.170837 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp"] Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.258397 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-secret-volume\") pod \"collect-profiles-29521515-wqbhp\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.258451 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-config-volume\") pod \"collect-profiles-29521515-wqbhp\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.258609 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m6k5\" (UniqueName: 
\"kubernetes.io/projected/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-kube-api-access-4m6k5\") pod \"collect-profiles-29521515-wqbhp\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.360853 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m6k5\" (UniqueName: \"kubernetes.io/projected/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-kube-api-access-4m6k5\") pod \"collect-profiles-29521515-wqbhp\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.360953 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-secret-volume\") pod \"collect-profiles-29521515-wqbhp\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.360986 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-config-volume\") pod \"collect-profiles-29521515-wqbhp\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.361761 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-config-volume\") pod \"collect-profiles-29521515-wqbhp\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.366517 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-secret-volume\") pod \"collect-profiles-29521515-wqbhp\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.379342 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m6k5\" (UniqueName: \"kubernetes.io/projected/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-kube-api-access-4m6k5\") pod \"collect-profiles-29521515-wqbhp\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.491733 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:00 crc kubenswrapper[4805]: I0217 01:15:00.993061 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp"] Feb 17 01:15:01 crc kubenswrapper[4805]: I0217 01:15:01.689661 4805 generic.go:334] "Generic (PLEG): container finished" podID="ac4e61b3-9a4a-497e-b65c-b61b5b09feb6" containerID="fcf4d70ad741ca3fc5e1d41470e839c0f11ec13b436c352020164c71e4d4deea" exitCode=0 Feb 17 01:15:01 crc kubenswrapper[4805]: I0217 01:15:01.689732 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" event={"ID":"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6","Type":"ContainerDied","Data":"fcf4d70ad741ca3fc5e1d41470e839c0f11ec13b436c352020164c71e4d4deea"} Feb 17 01:15:01 crc kubenswrapper[4805]: I0217 01:15:01.689804 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" event={"ID":"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6","Type":"ContainerStarted","Data":"dc6b6922ac5f035904596aea50ee3ea90a126d06fe2cd781f5609431b9d9d860"} Feb 17 01:15:01 crc kubenswrapper[4805]: I0217 01:15:01.785274 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:15:01 crc kubenswrapper[4805]: E0217 01:15:01.786242 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.234817 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.326620 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-secret-volume\") pod \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.326699 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-config-volume\") pod \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.326721 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m6k5\" (UniqueName: \"kubernetes.io/projected/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-kube-api-access-4m6k5\") pod \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\" (UID: \"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6\") " Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.328143 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac4e61b3-9a4a-497e-b65c-b61b5b09feb6" (UID: "ac4e61b3-9a4a-497e-b65c-b61b5b09feb6"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.332907 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac4e61b3-9a4a-497e-b65c-b61b5b09feb6" (UID: "ac4e61b3-9a4a-497e-b65c-b61b5b09feb6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.334083 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-kube-api-access-4m6k5" (OuterVolumeSpecName: "kube-api-access-4m6k5") pod "ac4e61b3-9a4a-497e-b65c-b61b5b09feb6" (UID: "ac4e61b3-9a4a-497e-b65c-b61b5b09feb6"). InnerVolumeSpecName "kube-api-access-4m6k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.430175 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.430218 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.430232 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m6k5\" (UniqueName: \"kubernetes.io/projected/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6-kube-api-access-4m6k5\") on node \"crc\" DevicePath \"\"" Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.714680 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" event={"ID":"ac4e61b3-9a4a-497e-b65c-b61b5b09feb6","Type":"ContainerDied","Data":"dc6b6922ac5f035904596aea50ee3ea90a126d06fe2cd781f5609431b9d9d860"} Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.714740 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc6b6922ac5f035904596aea50ee3ea90a126d06fe2cd781f5609431b9d9d860" Feb 17 01:15:03 crc kubenswrapper[4805]: I0217 01:15:03.714817 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp" Feb 17 01:15:04 crc kubenswrapper[4805]: I0217 01:15:04.319092 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49"] Feb 17 01:15:04 crc kubenswrapper[4805]: I0217 01:15:04.331853 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521470-82z49"] Feb 17 01:15:04 crc kubenswrapper[4805]: E0217 01:15:04.799466 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:15:04 crc kubenswrapper[4805]: I0217 01:15:04.799886 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7121f994-a6dc-4821-9f9b-f21ef4e212fe" path="/var/lib/kubelet/pods/7121f994-a6dc-4821-9f9b-f21ef4e212fe/volumes" Feb 17 01:15:07 crc kubenswrapper[4805]: E0217 01:15:07.788665 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:15:14 crc kubenswrapper[4805]: I0217 01:15:14.806458 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:15:14 crc kubenswrapper[4805]: E0217 01:15:14.807615 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:15:16 crc kubenswrapper[4805]: E0217 01:15:16.789693 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:15:19 crc kubenswrapper[4805]: I0217 01:15:19.797648 4805 scope.go:117] "RemoveContainer" containerID="7de9ee56286f6e03b2db57118e5510929b543d1e2598155979bfc52d5571a49b" Feb 17 01:15:21 crc kubenswrapper[4805]: E0217 01:15:21.788835 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:15:27 crc kubenswrapper[4805]: I0217 01:15:27.785622 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:15:27 crc kubenswrapper[4805]: E0217 01:15:27.786557 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:15:27 crc kubenswrapper[4805]: E0217 01:15:27.787524 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:15:33 crc kubenswrapper[4805]: E0217 01:15:33.787915 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:15:40 crc kubenswrapper[4805]: I0217 01:15:40.785091 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:15:40 crc kubenswrapper[4805]: E0217 01:15:40.786161 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:15:40 crc kubenswrapper[4805]: E0217 01:15:40.788662 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:15:45 crc kubenswrapper[4805]: E0217 01:15:45.788467 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:15:54 crc kubenswrapper[4805]: E0217 01:15:54.795717 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:15:55 crc kubenswrapper[4805]: I0217 01:15:55.785088 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:15:56 crc kubenswrapper[4805]: I0217 01:15:56.380318 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"1ac87c45c634ec431b1d3b40eda6479663f849c8605d17f7db9c30debe45ad4b"} Feb 17 
01:15:57 crc kubenswrapper[4805]: E0217 01:15:57.787990 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:16:06 crc kubenswrapper[4805]: E0217 01:16:06.789610 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:16:08 crc kubenswrapper[4805]: E0217 01:16:08.787569 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:16:18 crc kubenswrapper[4805]: E0217 01:16:18.786980 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:16:21 crc kubenswrapper[4805]: E0217 01:16:21.787275 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:16:29 crc kubenswrapper[4805]: E0217 01:16:29.787558 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:16:34 crc kubenswrapper[4805]: E0217 01:16:34.805788 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:16:44 crc kubenswrapper[4805]: E0217 01:16:44.798997 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:16:49 crc kubenswrapper[4805]: E0217 01:16:49.786387 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:16:55 crc kubenswrapper[4805]: E0217 01:16:55.789861 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:17:00 crc kubenswrapper[4805]: E0217 01:17:00.787870 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:17:06 crc kubenswrapper[4805]: E0217 01:17:06.788420 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:17:12 crc kubenswrapper[4805]: E0217 01:17:12.790571 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:17:21 crc kubenswrapper[4805]: E0217 01:17:21.788019 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:17:26 crc kubenswrapper[4805]: E0217 01:17:26.786135 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.044520 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78"] Feb 17 01:17:34 crc kubenswrapper[4805]: E0217 01:17:34.045471 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4e61b3-9a4a-497e-b65c-b61b5b09feb6" containerName="collect-profiles" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.045485 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4e61b3-9a4a-497e-b65c-b61b5b09feb6" containerName="collect-profiles" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.045699 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac4e61b3-9a4a-497e-b65c-b61b5b09feb6" containerName="collect-profiles" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.046505 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.062125 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wh24s" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.062302 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.062412 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.062452 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.062499 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.062497 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78"] Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.149936 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.150024 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.150414 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.150599 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.150739 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 
01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.150937 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qj7l\" (UniqueName: \"kubernetes.io/projected/9f253d42-6a7d-4e45-94e3-52965a6880a4-kube-api-access-5qj7l\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.151145 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.255687 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qj7l\" (UniqueName: \"kubernetes.io/projected/9f253d42-6a7d-4e45-94e3-52965a6880a4-kube-api-access-5qj7l\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.255894 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.256000 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.256051 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.256233 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.256396 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" 
(UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.256520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.267492 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.269155 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.269546 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.269653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.270008 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.270291 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.279053 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qj7l\" (UniqueName: \"kubernetes.io/projected/9f253d42-6a7d-4e45-94e3-52965a6880a4-kube-api-access-5qj7l\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-bmf78\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: I0217 01:17:34.377170 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:17:34 crc kubenswrapper[4805]: E0217 01:17:34.801090 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:17:35 crc kubenswrapper[4805]: I0217 01:17:35.031366 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78"] Feb 17 01:17:35 crc kubenswrapper[4805]: I0217 01:17:35.647205 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" event={"ID":"9f253d42-6a7d-4e45-94e3-52965a6880a4","Type":"ContainerStarted","Data":"aa47dfd304f354a10fe37b098be283b5a472fd8d0dbab479011f9ba186d41ee5"} Feb 17 01:17:36 crc kubenswrapper[4805]: I0217 01:17:36.662409 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" event={"ID":"9f253d42-6a7d-4e45-94e3-52965a6880a4","Type":"ContainerStarted","Data":"9b2b4bf4d6d6b493c9f9ac0dd5380677c31d1ca4ef974d2884376054f25e48b1"} Feb 17 01:17:36 crc kubenswrapper[4805]: I0217 01:17:36.692256 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" podStartSLOduration=2.141922244 podStartE2EDuration="2.692233935s" podCreationTimestamp="2026-02-17 01:17:34 +0000 UTC" firstStartedPulling="2026-02-17 01:17:35.031569107 +0000 UTC m=+3281.047378495" lastFinishedPulling="2026-02-17 01:17:35.581880768 +0000 UTC m=+3281.597690186" observedRunningTime="2026-02-17 01:17:36.691461223 +0000 UTC m=+3282.707270651" watchObservedRunningTime="2026-02-17 01:17:36.692233935 +0000 UTC m=+3282.708043343" Feb 17 01:17:37 crc kubenswrapper[4805]: E0217 01:17:37.786974 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:17:45 crc kubenswrapper[4805]: E0217 01:17:45.797586 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:17:49 crc kubenswrapper[4805]: E0217 01:17:49.786876 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:17:56 crc kubenswrapper[4805]: E0217 
01:17:56.787931 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:18:01 crc kubenswrapper[4805]: E0217 01:18:01.790865 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:18:09 crc kubenswrapper[4805]: E0217 01:18:09.790925 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:18:12 crc kubenswrapper[4805]: E0217 01:18:12.788690 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:18:22 crc kubenswrapper[4805]: E0217 01:18:22.789661 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:18:23 crc kubenswrapper[4805]: I0217 01:18:23.076908 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:18:23 crc kubenswrapper[4805]: I0217 01:18:23.076989 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:18:23 crc kubenswrapper[4805]: E0217 01:18:23.787249 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:18:26 crc kubenswrapper[4805]: I0217 01:18:26.365692 4805 generic.go:334] "Generic (PLEG): container finished" podID="9f253d42-6a7d-4e45-94e3-52965a6880a4" containerID="9b2b4bf4d6d6b493c9f9ac0dd5380677c31d1ca4ef974d2884376054f25e48b1" exitCode=2 Feb 17 01:18:26 crc kubenswrapper[4805]: I0217 01:18:26.365777 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" event={"ID":"9f253d42-6a7d-4e45-94e3-52965a6880a4","Type":"ContainerDied","Data":"9b2b4bf4d6d6b493c9f9ac0dd5380677c31d1ca4ef974d2884376054f25e48b1"} Feb 17 01:18:27 crc kubenswrapper[4805]: I0217 01:18:27.940278 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.045413 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-0\") pod \"9f253d42-6a7d-4e45-94e3-52965a6880a4\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.045787 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-1\") pod \"9f253d42-6a7d-4e45-94e3-52965a6880a4\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.045880 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ssh-key-openstack-edpm-ipam\") pod \"9f253d42-6a7d-4e45-94e3-52965a6880a4\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.046023 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-2\") pod \"9f253d42-6a7d-4e45-94e3-52965a6880a4\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.046046 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qj7l\" (UniqueName: \"kubernetes.io/projected/9f253d42-6a7d-4e45-94e3-52965a6880a4-kube-api-access-5qj7l\") pod \"9f253d42-6a7d-4e45-94e3-52965a6880a4\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.046538 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-inventory\") pod \"9f253d42-6a7d-4e45-94e3-52965a6880a4\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.046654 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-telemetry-combined-ca-bundle\") pod \"9f253d42-6a7d-4e45-94e3-52965a6880a4\" (UID: \"9f253d42-6a7d-4e45-94e3-52965a6880a4\") " Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.053617 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9f253d42-6a7d-4e45-94e3-52965a6880a4" (UID: "9f253d42-6a7d-4e45-94e3-52965a6880a4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.056542 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f253d42-6a7d-4e45-94e3-52965a6880a4-kube-api-access-5qj7l" (OuterVolumeSpecName: "kube-api-access-5qj7l") pod "9f253d42-6a7d-4e45-94e3-52965a6880a4" (UID: "9f253d42-6a7d-4e45-94e3-52965a6880a4"). InnerVolumeSpecName "kube-api-access-5qj7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.081764 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-inventory" (OuterVolumeSpecName: "inventory") pod "9f253d42-6a7d-4e45-94e3-52965a6880a4" (UID: "9f253d42-6a7d-4e45-94e3-52965a6880a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.084886 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "9f253d42-6a7d-4e45-94e3-52965a6880a4" (UID: "9f253d42-6a7d-4e45-94e3-52965a6880a4"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.088139 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "9f253d42-6a7d-4e45-94e3-52965a6880a4" (UID: "9f253d42-6a7d-4e45-94e3-52965a6880a4"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.101776 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f253d42-6a7d-4e45-94e3-52965a6880a4" (UID: "9f253d42-6a7d-4e45-94e3-52965a6880a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.108830 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "9f253d42-6a7d-4e45-94e3-52965a6880a4" (UID: "9f253d42-6a7d-4e45-94e3-52965a6880a4"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.149745 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.149788 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.149803 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.149817 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.149832 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qj7l\" (UniqueName: \"kubernetes.io/projected/9f253d42-6a7d-4e45-94e3-52965a6880a4-kube-api-access-5qj7l\") on node \"crc\" DevicePath \"\"" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.149845 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.149858 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f253d42-6a7d-4e45-94e3-52965a6880a4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.399745 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" event={"ID":"9f253d42-6a7d-4e45-94e3-52965a6880a4","Type":"ContainerDied","Data":"aa47dfd304f354a10fe37b098be283b5a472fd8d0dbab479011f9ba186d41ee5"} Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.399803 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa47dfd304f354a10fe37b098be283b5a472fd8d0dbab479011f9ba186d41ee5" Feb 17 01:18:28 crc kubenswrapper[4805]: I0217 01:18:28.399881 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bmf78" Feb 17 01:18:34 crc kubenswrapper[4805]: E0217 01:18:34.806097 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:18:38 crc kubenswrapper[4805]: E0217 01:18:38.788490 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:18:49 crc kubenswrapper[4805]: I0217 01:18:49.787117 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:18:49 crc kubenswrapper[4805]: E0217 01:18:49.919725 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:18:49 crc kubenswrapper[4805]: E0217 01:18:49.920278 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:18:49 crc kubenswrapper[4805]: E0217 01:18:49.920448 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:18:49 crc kubenswrapper[4805]: E0217 01:18:49.921811 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:18:50 crc kubenswrapper[4805]: E0217 01:18:50.933909 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:18:50 crc kubenswrapper[4805]: E0217 01:18:50.933974 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:18:50 crc kubenswrapper[4805]: E0217 01:18:50.934091 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:18:50 crc kubenswrapper[4805]: E0217 01:18:50.935265 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:18:53 crc kubenswrapper[4805]: I0217 01:18:53.077414 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:18:53 crc kubenswrapper[4805]: I0217 01:18:53.077785 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:19:01 crc kubenswrapper[4805]: E0217 01:19:01.789757 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:19:02 crc kubenswrapper[4805]: E0217 01:19:02.787582 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:19:14 crc kubenswrapper[4805]: E0217 01:19:14.803121 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:19:14 crc kubenswrapper[4805]: E0217 01:19:14.803130 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:19:23 crc kubenswrapper[4805]: I0217 01:19:23.077039 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:19:23 crc kubenswrapper[4805]: I0217 
01:19:23.077913 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:19:23 crc kubenswrapper[4805]: I0217 01:19:23.077982 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 01:19:23 crc kubenswrapper[4805]: I0217 01:19:23.079033 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ac87c45c634ec431b1d3b40eda6479663f849c8605d17f7db9c30debe45ad4b"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 01:19:23 crc kubenswrapper[4805]: I0217 01:19:23.079106 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://1ac87c45c634ec431b1d3b40eda6479663f849c8605d17f7db9c30debe45ad4b" gracePeriod=600 Feb 17 01:19:24 crc kubenswrapper[4805]: I0217 01:19:24.172590 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="1ac87c45c634ec431b1d3b40eda6479663f849c8605d17f7db9c30debe45ad4b" exitCode=0 Feb 17 01:19:24 crc kubenswrapper[4805]: I0217 01:19:24.172630 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"1ac87c45c634ec431b1d3b40eda6479663f849c8605d17f7db9c30debe45ad4b"} Feb 17 01:19:24 crc kubenswrapper[4805]: I0217 01:19:24.173226 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7"} Feb 17 01:19:24 crc kubenswrapper[4805]: I0217 01:19:24.173254 4805 scope.go:117] "RemoveContainer" containerID="6de4a9af56198ba0af9bae5c9d5a5959493200debfe135f627016d3a13b525ba" Feb 17 01:19:25 crc kubenswrapper[4805]: E0217 01:19:25.787097 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:19:28 crc kubenswrapper[4805]: E0217 01:19:28.791523 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:19:40 crc kubenswrapper[4805]: E0217 01:19:40.789251 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:19:42 crc kubenswrapper[4805]: E0217 01:19:42.787016 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:19:53 crc kubenswrapper[4805]: E0217 01:19:53.793739 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:19:55 crc kubenswrapper[4805]: E0217 01:19:55.789916 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:20:08 crc kubenswrapper[4805]: E0217 01:20:08.788585 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:20:08 crc kubenswrapper[4805]: E0217 01:20:08.788790 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:20:19 crc kubenswrapper[4805]: E0217 01:20:19.787992 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:20:23 crc kubenswrapper[4805]: E0217 01:20:23.788060 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:20:30 crc kubenswrapper[4805]: E0217 01:20:30.789250 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:20:36 crc kubenswrapper[4805]: E0217 01:20:36.787191 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:20:45 crc kubenswrapper[4805]: E0217 01:20:45.787814 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:20:49 crc kubenswrapper[4805]: I0217 01:20:49.736281 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lxrgn"] Feb 17 01:20:49 crc kubenswrapper[4805]: E0217 01:20:49.737653 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f253d42-6a7d-4e45-94e3-52965a6880a4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:20:49 crc kubenswrapper[4805]: I0217 01:20:49.737678 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f253d42-6a7d-4e45-94e3-52965a6880a4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:20:49 crc kubenswrapper[4805]: I0217 01:20:49.738046 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f253d42-6a7d-4e45-94e3-52965a6880a4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 17 01:20:49 crc kubenswrapper[4805]: I0217 01:20:49.742382 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:20:49 crc kubenswrapper[4805]: I0217 01:20:49.762963 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxrgn"] Feb 17 01:20:49 crc kubenswrapper[4805]: E0217 01:20:49.786713 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:20:49 crc kubenswrapper[4805]: I0217 01:20:49.898671 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-catalog-content\") pod \"redhat-marketplace-lxrgn\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:20:49 crc kubenswrapper[4805]: I0217 01:20:49.898723 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-utilities\") pod \"redhat-marketplace-lxrgn\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:20:49 crc kubenswrapper[4805]: I0217 01:20:49.899102 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvwsj\" (UniqueName: \"kubernetes.io/projected/e68a50fa-6c2a-4282-974e-c355eca6f003-kube-api-access-xvwsj\") pod \"redhat-marketplace-lxrgn\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:20:50 crc 
kubenswrapper[4805]: I0217 01:20:50.000601 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwsj\" (UniqueName: \"kubernetes.io/projected/e68a50fa-6c2a-4282-974e-c355eca6f003-kube-api-access-xvwsj\") pod \"redhat-marketplace-lxrgn\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:20:50 crc kubenswrapper[4805]: I0217 01:20:50.000764 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-catalog-content\") pod \"redhat-marketplace-lxrgn\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:20:50 crc kubenswrapper[4805]: I0217 01:20:50.000792 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-utilities\") pod \"redhat-marketplace-lxrgn\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:20:50 crc kubenswrapper[4805]: I0217 01:20:50.001196 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-catalog-content\") pod \"redhat-marketplace-lxrgn\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:20:50 crc kubenswrapper[4805]: I0217 01:20:50.001596 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-utilities\") pod \"redhat-marketplace-lxrgn\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:20:50 crc kubenswrapper[4805]: I0217 01:20:50.020069 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwsj\" (UniqueName: \"kubernetes.io/projected/e68a50fa-6c2a-4282-974e-c355eca6f003-kube-api-access-xvwsj\") pod \"redhat-marketplace-lxrgn\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:20:50 crc kubenswrapper[4805]: I0217 01:20:50.063653 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:20:50 crc kubenswrapper[4805]: I0217 01:20:50.535687 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxrgn"] Feb 17 01:20:50 crc kubenswrapper[4805]: W0217 01:20:50.549079 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode68a50fa_6c2a_4282_974e_c355eca6f003.slice/crio-b23195f4e13b79513eed6587e2ace4829b4da412387aa4e5609b986873d47743 WatchSource:0}: Error finding container b23195f4e13b79513eed6587e2ace4829b4da412387aa4e5609b986873d47743: Status 404 returned error can't find the container with id b23195f4e13b79513eed6587e2ace4829b4da412387aa4e5609b986873d47743 Feb 17 01:20:51 crc kubenswrapper[4805]: I0217 01:20:51.318135 4805 generic.go:334] "Generic (PLEG): container finished" podID="e68a50fa-6c2a-4282-974e-c355eca6f003" containerID="ddcc47d6317275f41b03963e658fd0abc52c875506b6d765a7a4a063f9db0def" exitCode=0 Feb 17 01:20:51 crc kubenswrapper[4805]: I0217 01:20:51.318212 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxrgn" event={"ID":"e68a50fa-6c2a-4282-974e-c355eca6f003","Type":"ContainerDied","Data":"ddcc47d6317275f41b03963e658fd0abc52c875506b6d765a7a4a063f9db0def"} Feb 17 01:20:51 crc kubenswrapper[4805]: I0217 01:20:51.318761 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxrgn" event={"ID":"e68a50fa-6c2a-4282-974e-c355eca6f003","Type":"ContainerStarted","Data":"b23195f4e13b79513eed6587e2ace4829b4da412387aa4e5609b986873d47743"} Feb 17 01:20:52 crc kubenswrapper[4805]: I0217 01:20:52.340426 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxrgn" event={"ID":"e68a50fa-6c2a-4282-974e-c355eca6f003","Type":"ContainerStarted","Data":"025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843"} Feb 17 01:20:53 crc kubenswrapper[4805]: I0217 01:20:53.355428 4805 generic.go:334] "Generic (PLEG): container finished" podID="e68a50fa-6c2a-4282-974e-c355eca6f003" containerID="025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843" exitCode=0 Feb 17 01:20:53 crc kubenswrapper[4805]: I0217 01:20:53.355549 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxrgn" event={"ID":"e68a50fa-6c2a-4282-974e-c355eca6f003","Type":"ContainerDied","Data":"025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843"} Feb 17 01:20:54 crc kubenswrapper[4805]: I0217 01:20:54.371948 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxrgn" event={"ID":"e68a50fa-6c2a-4282-974e-c355eca6f003","Type":"ContainerStarted","Data":"683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4"} Feb 17 01:20:54 crc kubenswrapper[4805]: I0217 01:20:54.399234 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lxrgn" podStartSLOduration=2.9707209519999997 podStartE2EDuration="5.399212498s" podCreationTimestamp="2026-02-17 01:20:49 +0000 UTC" firstStartedPulling="2026-02-17 01:20:51.32057686 +0000 UTC m=+3477.336386258" lastFinishedPulling="2026-02-17 01:20:53.749068366 +0000 UTC m=+3479.764877804" observedRunningTime="2026-02-17 01:20:54.3921632 +0000 UTC m=+3480.407972628" watchObservedRunningTime="2026-02-17 01:20:54.399212498 +0000 UTC m=+3480.415021936" Feb 17 
01:20:57 crc kubenswrapper[4805]: E0217 01:20:57.786925 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:21:00 crc kubenswrapper[4805]: I0217 01:21:00.065488 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:21:00 crc kubenswrapper[4805]: I0217 01:21:00.066092 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:21:00 crc kubenswrapper[4805]: I0217 01:21:00.139017 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:21:00 crc kubenswrapper[4805]: I0217 01:21:00.500691 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:21:00 crc kubenswrapper[4805]: I0217 01:21:00.569378 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxrgn"] Feb 17 01:21:01 crc kubenswrapper[4805]: E0217 01:21:01.788003 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:21:02 crc kubenswrapper[4805]: I0217 01:21:02.463391 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lxrgn" podUID="e68a50fa-6c2a-4282-974e-c355eca6f003" containerName="registry-server" containerID="cri-o://683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4" gracePeriod=2 Feb 17 01:21:02 crc kubenswrapper[4805]: I0217 01:21:02.943398 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.117213 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvwsj\" (UniqueName: \"kubernetes.io/projected/e68a50fa-6c2a-4282-974e-c355eca6f003-kube-api-access-xvwsj\") pod \"e68a50fa-6c2a-4282-974e-c355eca6f003\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.118346 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-utilities\") pod \"e68a50fa-6c2a-4282-974e-c355eca6f003\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.118430 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-catalog-content\") pod \"e68a50fa-6c2a-4282-974e-c355eca6f003\" (UID: \"e68a50fa-6c2a-4282-974e-c355eca6f003\") " Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.119203 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-utilities" (OuterVolumeSpecName: "utilities") pod "e68a50fa-6c2a-4282-974e-c355eca6f003" (UID: "e68a50fa-6c2a-4282-974e-c355eca6f003"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.119464 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.127970 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68a50fa-6c2a-4282-974e-c355eca6f003-kube-api-access-xvwsj" (OuterVolumeSpecName: "kube-api-access-xvwsj") pod "e68a50fa-6c2a-4282-974e-c355eca6f003" (UID: "e68a50fa-6c2a-4282-974e-c355eca6f003"). InnerVolumeSpecName "kube-api-access-xvwsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.159032 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e68a50fa-6c2a-4282-974e-c355eca6f003" (UID: "e68a50fa-6c2a-4282-974e-c355eca6f003"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.221675 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvwsj\" (UniqueName: \"kubernetes.io/projected/e68a50fa-6c2a-4282-974e-c355eca6f003-kube-api-access-xvwsj\") on node \"crc\" DevicePath \"\"" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.221706 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e68a50fa-6c2a-4282-974e-c355eca6f003-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.477289 4805 generic.go:334] "Generic (PLEG): container finished" podID="e68a50fa-6c2a-4282-974e-c355eca6f003" containerID="683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4" exitCode=0 Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.477376 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxrgn" event={"ID":"e68a50fa-6c2a-4282-974e-c355eca6f003","Type":"ContainerDied","Data":"683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4"} Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.477466 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxrgn" event={"ID":"e68a50fa-6c2a-4282-974e-c355eca6f003","Type":"ContainerDied","Data":"b23195f4e13b79513eed6587e2ace4829b4da412387aa4e5609b986873d47743"} Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.477463 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxrgn" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.477504 4805 scope.go:117] "RemoveContainer" containerID="683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.521787 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxrgn"] Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.522700 4805 scope.go:117] "RemoveContainer" containerID="025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.537274 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxrgn"] Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.558088 4805 scope.go:117] "RemoveContainer" containerID="ddcc47d6317275f41b03963e658fd0abc52c875506b6d765a7a4a063f9db0def" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.618665 4805 scope.go:117] "RemoveContainer" containerID="683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4" Feb 17 01:21:03 crc kubenswrapper[4805]: E0217 01:21:03.619492 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4\": container with ID starting with 683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4 not found: ID does not exist" containerID="683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.619612 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4"} err="failed to get container status 
\"683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4\": rpc error: code = NotFound desc = could not find container \"683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4\": container with ID starting with 683e454179cc46122c76346c1e8f6f17fe8b2f661dc6a3c55294e8a4132587a4 not found: ID does not exist" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.619643 4805 scope.go:117] "RemoveContainer" containerID="025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843" Feb 17 01:21:03 crc kubenswrapper[4805]: E0217 01:21:03.620271 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843\": container with ID starting with 025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843 not found: ID does not exist" containerID="025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.620388 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843"} err="failed to get container status \"025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843\": rpc error: code = NotFound desc = could not find container \"025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843\": container with ID starting with 025ebb3fa74c8ab01839dda4ba79e45801276653aa3df93716ddbf120e08a843 not found: ID does not exist" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.620452 4805 scope.go:117] "RemoveContainer" containerID="ddcc47d6317275f41b03963e658fd0abc52c875506b6d765a7a4a063f9db0def" Feb 17 01:21:03 crc kubenswrapper[4805]: E0217 01:21:03.620897 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddcc47d6317275f41b03963e658fd0abc52c875506b6d765a7a4a063f9db0def\": container with ID starting with ddcc47d6317275f41b03963e658fd0abc52c875506b6d765a7a4a063f9db0def not found: ID does not exist" containerID="ddcc47d6317275f41b03963e658fd0abc52c875506b6d765a7a4a063f9db0def" Feb 17 01:21:03 crc kubenswrapper[4805]: I0217 01:21:03.620943 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddcc47d6317275f41b03963e658fd0abc52c875506b6d765a7a4a063f9db0def"} err="failed to get container status \"ddcc47d6317275f41b03963e658fd0abc52c875506b6d765a7a4a063f9db0def\": rpc error: code = NotFound desc = could not find container \"ddcc47d6317275f41b03963e658fd0abc52c875506b6d765a7a4a063f9db0def\": container with ID starting with ddcc47d6317275f41b03963e658fd0abc52c875506b6d765a7a4a063f9db0def not found: ID does not exist" Feb 17 01:21:04 crc kubenswrapper[4805]: I0217 01:21:04.807681 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68a50fa-6c2a-4282-974e-c355eca6f003" path="/var/lib/kubelet/pods/e68a50fa-6c2a-4282-974e-c355eca6f003/volumes" Feb 17 01:21:08 crc kubenswrapper[4805]: E0217 01:21:08.788240 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:21:13 crc kubenswrapper[4805]: E0217 01:21:13.787831 4805 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:21:20 crc kubenswrapper[4805]: E0217 01:21:20.785501 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:21:23 crc kubenswrapper[4805]: I0217 01:21:23.077758 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:21:23 crc kubenswrapper[4805]: I0217 01:21:23.078406 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:21:25 crc kubenswrapper[4805]: E0217 01:21:25.788071 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:21:34 crc kubenswrapper[4805]: E0217 01:21:34.798442 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:21:37 crc kubenswrapper[4805]: E0217 01:21:37.786768 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:21:45 crc kubenswrapper[4805]: E0217 01:21:45.788165 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:21:50 crc kubenswrapper[4805]: E0217 01:21:50.788236 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:21:53 crc kubenswrapper[4805]: I0217 01:21:53.077003 4805 patch_prober.go:28] 
interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:21:53 crc kubenswrapper[4805]: I0217 01:21:53.077469 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:21:58 crc kubenswrapper[4805]: E0217 01:21:58.788907 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:22:02 crc kubenswrapper[4805]: E0217 01:22:02.788130 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:22:12 crc kubenswrapper[4805]: E0217 01:22:12.789627 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:22:13 crc kubenswrapper[4805]: E0217 01:22:13.787231 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:22:23 crc kubenswrapper[4805]: I0217 01:22:23.077421 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:22:23 crc kubenswrapper[4805]: I0217 01:22:23.078230 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:22:23 crc kubenswrapper[4805]: I0217 01:22:23.078298 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 01:22:23 crc kubenswrapper[4805]: I0217 01:22:23.079627 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7"} 
pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 01:22:23 crc kubenswrapper[4805]: I0217 01:22:23.079715 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" gracePeriod=600 Feb 17 01:22:23 crc kubenswrapper[4805]: E0217 01:22:23.209463 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:22:23 crc kubenswrapper[4805]: I0217 01:22:23.558189 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" exitCode=0 Feb 17 01:22:23 crc kubenswrapper[4805]: I0217 01:22:23.558242 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7"} Feb 17 01:22:23 crc kubenswrapper[4805]: I0217 01:22:23.558297 4805 scope.go:117] "RemoveContainer" containerID="1ac87c45c634ec431b1d3b40eda6479663f849c8605d17f7db9c30debe45ad4b" Feb 17 01:22:23 crc kubenswrapper[4805]: I0217 01:22:23.559287 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:22:23 crc kubenswrapper[4805]: E0217 01:22:23.559911 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:22:24 crc kubenswrapper[4805]: E0217 01:22:24.810225 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:22:28 crc kubenswrapper[4805]: E0217 01:22:28.787466 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:22:34 crc kubenswrapper[4805]: I0217 01:22:34.802230 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:22:34 crc kubenswrapper[4805]: E0217 01:22:34.803596 4805 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:22:37 crc kubenswrapper[4805]: E0217 01:22:37.788018 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:22:40 crc kubenswrapper[4805]: E0217 01:22:40.787676 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:22:45 crc kubenswrapper[4805]: I0217 01:22:45.785230 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:22:45 crc kubenswrapper[4805]: E0217 01:22:45.786599 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:22:48 crc kubenswrapper[4805]: E0217 01:22:48.789483 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:22:52 crc kubenswrapper[4805]: E0217 01:22:52.787292 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:22:59 crc kubenswrapper[4805]: E0217 01:22:59.788290 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:23:00 crc kubenswrapper[4805]: I0217 01:23:00.787549 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:23:00 crc kubenswrapper[4805]: E0217 01:23:00.788001 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:23:06 crc kubenswrapper[4805]: E0217 01:23:06.787739 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:23:11 crc kubenswrapper[4805]: E0217 01:23:11.787346 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:23:15 crc kubenswrapper[4805]: I0217 01:23:15.786155 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:23:15 crc kubenswrapper[4805]: E0217 01:23:15.787532 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:23:17 crc kubenswrapper[4805]: E0217 01:23:17.787209 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:23:26 crc kubenswrapper[4805]: E0217 01:23:26.786732 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:23:28 crc kubenswrapper[4805]: E0217 01:23:28.788110 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:23:30 crc kubenswrapper[4805]: I0217 01:23:30.785390 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:23:30 crc kubenswrapper[4805]: E0217 01:23:30.786021 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:23:38 crc kubenswrapper[4805]: E0217 01:23:38.792077 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:23:41 crc kubenswrapper[4805]: E0217 01:23:41.787910 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:23:42 crc kubenswrapper[4805]: I0217 01:23:42.785822 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:23:42 crc kubenswrapper[4805]: E0217 01:23:42.786539 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:23:50 crc kubenswrapper[4805]: I0217 01:23:50.787925 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:23:50 crc kubenswrapper[4805]: E0217 01:23:50.921987 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:23:50 crc kubenswrapper[4805]: E0217 01:23:50.922067 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:23:50 crc kubenswrapper[4805]: E0217 01:23:50.922217 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:23:50 crc kubenswrapper[4805]: E0217 01:23:50.923451 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:23:54 crc kubenswrapper[4805]: I0217 01:23:54.792191 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:23:54 crc kubenswrapper[4805]: E0217 01:23:54.792991 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:23:56 crc kubenswrapper[4805]: E0217 01:23:56.910729 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:23:56 crc kubenswrapper[4805]: E0217 01:23:56.911064 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:23:56 crc kubenswrapper[4805]: E0217 01:23:56.911168 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:23:56 crc kubenswrapper[4805]: E0217 01:23:56.912405 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:24:02 crc kubenswrapper[4805]: E0217 01:24:02.790765 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:24:07 crc kubenswrapper[4805]: E0217 01:24:07.787180 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:24:08 crc kubenswrapper[4805]: I0217 01:24:08.785313 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:24:08 crc kubenswrapper[4805]: E0217 01:24:08.786288 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:24:15 crc kubenswrapper[4805]: E0217 01:24:15.786855 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:24:20 crc kubenswrapper[4805]: E0217 01:24:20.788126 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:24:22 crc kubenswrapper[4805]: I0217 01:24:22.785377 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:24:22 crc kubenswrapper[4805]: E0217 01:24:22.786198 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:24:28 crc kubenswrapper[4805]: E0217 01:24:28.790481 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:24:33 crc kubenswrapper[4805]: I0217 01:24:33.785271 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:24:33 crc kubenswrapper[4805]: E0217 01:24:33.787117 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:24:35 crc kubenswrapper[4805]: E0217 01:24:35.787924 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:24:43 crc kubenswrapper[4805]: E0217 01:24:43.787543 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:24:47 crc kubenswrapper[4805]: I0217 01:24:47.784788 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:24:47 crc kubenswrapper[4805]: E0217 01:24:47.785536 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:24:50 crc kubenswrapper[4805]: E0217 01:24:50.788126 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:24:56 crc kubenswrapper[4805]: E0217 01:24:56.796680 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:25:01 crc kubenswrapper[4805]: I0217 01:25:01.785357 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:25:01 crc kubenswrapper[4805]: E0217 01:25:01.786275 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:25:05 crc kubenswrapper[4805]: E0217 01:25:05.788702 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:25:09 crc kubenswrapper[4805]: E0217 01:25:09.787104 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:25:16 crc kubenswrapper[4805]: I0217 01:25:16.785972 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:25:16 crc kubenswrapper[4805]: E0217 01:25:16.786904 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:25:17 crc kubenswrapper[4805]: E0217 01:25:17.787456 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:25:23 crc kubenswrapper[4805]: E0217 01:25:23.787694 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:25:27 crc kubenswrapper[4805]: I0217 01:25:27.785565 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:25:27 crc kubenswrapper[4805]: E0217 01:25:27.786253 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:25:28 crc kubenswrapper[4805]: E0217 01:25:28.788563 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:25:34 crc kubenswrapper[4805]: E0217 01:25:34.801846 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:25:40 crc kubenswrapper[4805]: I0217 01:25:40.786410 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:25:40 crc kubenswrapper[4805]: E0217 01:25:40.787217 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:25:43 crc kubenswrapper[4805]: E0217 01:25:43.787998 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:25:48 crc kubenswrapper[4805]: E0217 01:25:48.789685 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:25:51 crc kubenswrapper[4805]: I0217 01:25:51.784402 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:25:51 crc kubenswrapper[4805]: E0217 01:25:51.785087 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:25:55 crc kubenswrapper[4805]: E0217 01:25:55.787571 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:26:00 crc kubenswrapper[4805]: E0217 01:26:00.788635 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:26:02 crc kubenswrapper[4805]: I0217 01:26:02.785495 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:26:02 crc kubenswrapper[4805]: E0217 01:26:02.786459 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:26:09 crc kubenswrapper[4805]: E0217 01:26:09.786083 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:26:15 crc kubenswrapper[4805]: I0217 01:26:15.786244 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:26:15 crc kubenswrapper[4805]: E0217 01:26:15.787247 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:26:15 crc kubenswrapper[4805]: E0217 01:26:15.788256 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:26:21 crc kubenswrapper[4805]: E0217 01:26:21.789977 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" 
podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:26:27 crc kubenswrapper[4805]: E0217 01:26:27.786747 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:26:30 crc kubenswrapper[4805]: I0217 01:26:30.784523 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:26:30 crc kubenswrapper[4805]: E0217 01:26:30.785258 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:26:35 crc kubenswrapper[4805]: E0217 01:26:35.788078 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:26:41 crc kubenswrapper[4805]: E0217 01:26:41.789087 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:26:42 crc kubenswrapper[4805]: I0217 01:26:42.785583 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:26:42 crc kubenswrapper[4805]: E0217 01:26:42.785918 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:26:47 crc kubenswrapper[4805]: E0217 01:26:47.791481 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:26:53 crc kubenswrapper[4805]: I0217 01:26:53.785489 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:26:53 crc kubenswrapper[4805]: E0217 01:26:53.786337 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:26:55 crc kubenswrapper[4805]: E0217 01:26:55.788782 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:26:59 crc kubenswrapper[4805]: E0217 01:26:59.788830 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:27:04 crc kubenswrapper[4805]: I0217 01:27:04.799481 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:27:04 crc kubenswrapper[4805]: E0217 01:27:04.800599 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:27:07 crc kubenswrapper[4805]: E0217 01:27:07.786648 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:27:11 crc kubenswrapper[4805]: E0217 01:27:11.790626 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:27:19 crc kubenswrapper[4805]: I0217 01:27:19.786400 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:27:19 crc kubenswrapper[4805]: E0217 01:27:19.787524 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:27:21 crc kubenswrapper[4805]: E0217 01:27:21.789523 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:27:23 crc kubenswrapper[4805]: E0217 01:27:23.788462 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:27:30 crc kubenswrapper[4805]: I0217 01:27:30.786043 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:27:31 crc kubenswrapper[4805]: I0217 01:27:31.634828 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"e70b524fd8a026bbaa383cfef94d5abcac048725251bed7339bb44bcbe80de3b"} Feb 17 01:27:33 crc kubenswrapper[4805]: E0217 01:27:33.787611 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:27:34 crc kubenswrapper[4805]: E0217 01:27:34.803655 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:27:48 crc kubenswrapper[4805]: E0217 01:27:48.789488 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:27:48 crc kubenswrapper[4805]: E0217 01:27:48.790406 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:27:59 crc kubenswrapper[4805]: E0217 01:27:59.786775 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:28:03 crc kubenswrapper[4805]: E0217 01:28:03.788042 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:28:10 crc kubenswrapper[4805]: E0217 01:28:10.786994 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:28:18 crc kubenswrapper[4805]: E0217 01:28:18.786383 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:28:25 crc kubenswrapper[4805]: E0217 01:28:25.787113 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:28:32 crc kubenswrapper[4805]: E0217 01:28:32.789126 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:28:36 crc kubenswrapper[4805]: E0217 01:28:36.806027 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:28:46 crc kubenswrapper[4805]: E0217 01:28:46.789053 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:28:48 crc kubenswrapper[4805]: E0217 01:28:48.787517 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:28:57 crc kubenswrapper[4805]: I0217 01:28:57.787537 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:28:57 crc kubenswrapper[4805]: E0217 01:28:57.923655 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:28:57 crc kubenswrapper[4805]: E0217 01:28:57.923881 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:28:57 crc kubenswrapper[4805]: E0217 01:28:57.924050 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 01:28:57 crc kubenswrapper[4805]: E0217 01:28:57.925591 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:29:02 crc kubenswrapper[4805]: E0217 01:29:02.919421 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:29:02 crc kubenswrapper[4805]: E0217 01:29:02.919984 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:29:02 crc kubenswrapper[4805]: E0217 01:29:02.920100 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:29:02 crc kubenswrapper[4805]: E0217 01:29:02.921281 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:29:10 crc kubenswrapper[4805]: E0217 01:29:10.789039 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:29:14 crc kubenswrapper[4805]: E0217 01:29:14.817725 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:29:25 crc kubenswrapper[4805]: E0217 01:29:25.787260 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:29:27 crc kubenswrapper[4805]: E0217 01:29:27.787449 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:29:39 crc kubenswrapper[4805]: E0217 01:29:39.788004 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:29:42 crc kubenswrapper[4805]: E0217 01:29:42.787901 4805 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:29:53 crc kubenswrapper[4805]: I0217 01:29:53.077237 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:29:53 crc kubenswrapper[4805]: I0217 01:29:53.077856 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:29:53 crc kubenswrapper[4805]: E0217 01:29:53.788158 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:29:56 crc kubenswrapper[4805]: E0217 01:29:56.788936 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.181182 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf"] Feb 17 01:30:00 crc kubenswrapper[4805]: E0217 01:30:00.182096 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68a50fa-6c2a-4282-974e-c355eca6f003" containerName="extract-utilities" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.182112 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68a50fa-6c2a-4282-974e-c355eca6f003" containerName="extract-utilities" Feb 17 01:30:00 crc kubenswrapper[4805]: E0217 01:30:00.182128 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68a50fa-6c2a-4282-974e-c355eca6f003" containerName="extract-content" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.182135 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68a50fa-6c2a-4282-974e-c355eca6f003" containerName="extract-content" Feb 17 01:30:00 crc kubenswrapper[4805]: E0217 01:30:00.182148 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68a50fa-6c2a-4282-974e-c355eca6f003" containerName="registry-server" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.182153 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68a50fa-6c2a-4282-974e-c355eca6f003" containerName="registry-server" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.182336 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68a50fa-6c2a-4282-974e-c355eca6f003" containerName="registry-server" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.183171 4805 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.185282 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.186269 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.197445 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf"] Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.277901 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ab5278c-236f-468f-89f3-e9561a865440-config-volume\") pod \"collect-profiles-29521530-2d9jf\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.278024 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ab5278c-236f-468f-89f3-e9561a865440-secret-volume\") pod \"collect-profiles-29521530-2d9jf\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.278064 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckgjk\" (UniqueName: \"kubernetes.io/projected/9ab5278c-236f-468f-89f3-e9561a865440-kube-api-access-ckgjk\") pod \"collect-profiles-29521530-2d9jf\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.380377 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ab5278c-236f-468f-89f3-e9561a865440-config-volume\") pod \"collect-profiles-29521530-2d9jf\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.380557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ab5278c-236f-468f-89f3-e9561a865440-secret-volume\") pod \"collect-profiles-29521530-2d9jf\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.380604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckgjk\" (UniqueName: \"kubernetes.io/projected/9ab5278c-236f-468f-89f3-e9561a865440-kube-api-access-ckgjk\") pod \"collect-profiles-29521530-2d9jf\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.381316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ab5278c-236f-468f-89f3-e9561a865440-config-volume\") 
pod \"collect-profiles-29521530-2d9jf\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.390422 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ab5278c-236f-468f-89f3-e9561a865440-secret-volume\") pod \"collect-profiles-29521530-2d9jf\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.400872 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckgjk\" (UniqueName: \"kubernetes.io/projected/9ab5278c-236f-468f-89f3-e9561a865440-kube-api-access-ckgjk\") pod \"collect-profiles-29521530-2d9jf\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.513733 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:00 crc kubenswrapper[4805]: I0217 01:30:00.971248 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf"] Feb 17 01:30:01 crc kubenswrapper[4805]: I0217 01:30:01.579117 4805 generic.go:334] "Generic (PLEG): container finished" podID="9ab5278c-236f-468f-89f3-e9561a865440" containerID="bbad8b844385edb2c8a7d6f70842841c34c0c842cd420ea98d3ed9e51ad4ed27" exitCode=0 Feb 17 01:30:01 crc kubenswrapper[4805]: I0217 01:30:01.579346 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" event={"ID":"9ab5278c-236f-468f-89f3-e9561a865440","Type":"ContainerDied","Data":"bbad8b844385edb2c8a7d6f70842841c34c0c842cd420ea98d3ed9e51ad4ed27"} Feb 17 01:30:01 crc kubenswrapper[4805]: I0217 01:30:01.579508 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" event={"ID":"9ab5278c-236f-468f-89f3-e9561a865440","Type":"ContainerStarted","Data":"006d66ffeeff86851a1a8e9496fc2f589d475d51c023611cf621ae9429ef9a36"} Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.035512 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.145193 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ab5278c-236f-468f-89f3-e9561a865440-secret-volume\") pod \"9ab5278c-236f-468f-89f3-e9561a865440\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.145503 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckgjk\" (UniqueName: \"kubernetes.io/projected/9ab5278c-236f-468f-89f3-e9561a865440-kube-api-access-ckgjk\") pod \"9ab5278c-236f-468f-89f3-e9561a865440\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.146723 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ab5278c-236f-468f-89f3-e9561a865440-config-volume\") pod \"9ab5278c-236f-468f-89f3-e9561a865440\" (UID: \"9ab5278c-236f-468f-89f3-e9561a865440\") " Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.147556 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ab5278c-236f-468f-89f3-e9561a865440-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ab5278c-236f-468f-89f3-e9561a865440" (UID: "9ab5278c-236f-468f-89f3-e9561a865440"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.152711 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ab5278c-236f-468f-89f3-e9561a865440-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ab5278c-236f-468f-89f3-e9561a865440" (UID: "9ab5278c-236f-468f-89f3-e9561a865440"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.162567 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ab5278c-236f-468f-89f3-e9561a865440-kube-api-access-ckgjk" (OuterVolumeSpecName: "kube-api-access-ckgjk") pod "9ab5278c-236f-468f-89f3-e9561a865440" (UID: "9ab5278c-236f-468f-89f3-e9561a865440"). InnerVolumeSpecName "kube-api-access-ckgjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.250280 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckgjk\" (UniqueName: \"kubernetes.io/projected/9ab5278c-236f-468f-89f3-e9561a865440-kube-api-access-ckgjk\") on node \"crc\" DevicePath \"\"" Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.250371 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ab5278c-236f-468f-89f3-e9561a865440-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.250396 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ab5278c-236f-468f-89f3-e9561a865440-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.620780 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" event={"ID":"9ab5278c-236f-468f-89f3-e9561a865440","Type":"ContainerDied","Data":"006d66ffeeff86851a1a8e9496fc2f589d475d51c023611cf621ae9429ef9a36"} Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.621134 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006d66ffeeff86851a1a8e9496fc2f589d475d51c023611cf621ae9429ef9a36" Feb 17 01:30:03 crc kubenswrapper[4805]: I0217 01:30:03.620861 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521530-2d9jf" Feb 17 01:30:04 crc kubenswrapper[4805]: I0217 01:30:04.124306 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7"] Feb 17 01:30:04 crc kubenswrapper[4805]: I0217 01:30:04.145868 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521485-8qqm7"] Feb 17 01:30:04 crc kubenswrapper[4805]: E0217 01:30:04.793671 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:30:04 crc kubenswrapper[4805]: I0217 01:30:04.804292 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f" path="/var/lib/kubelet/pods/bdbcc59c-a2e7-4e14-bb38-ddaa9abd290f/volumes" Feb 17 01:30:10 crc kubenswrapper[4805]: E0217 01:30:10.789412 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.383849 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lmddq"] Feb 17 01:30:11 crc kubenswrapper[4805]: E0217 01:30:11.384602 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab5278c-236f-468f-89f3-e9561a865440" containerName="collect-profiles" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.384631 4805 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9ab5278c-236f-468f-89f3-e9561a865440" containerName="collect-profiles" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.384971 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ab5278c-236f-468f-89f3-e9561a865440" containerName="collect-profiles" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.387749 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.442709 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmddq"] Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.483842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-catalog-content\") pod \"certified-operators-lmddq\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.483928 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-utilities\") pod \"certified-operators-lmddq\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.484356 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwljc\" (UniqueName: \"kubernetes.io/projected/40de8bfb-f2cc-4680-a75c-b8100c40bacd-kube-api-access-fwljc\") pod \"certified-operators-lmddq\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.586873 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwljc\" (UniqueName: \"kubernetes.io/projected/40de8bfb-f2cc-4680-a75c-b8100c40bacd-kube-api-access-fwljc\") pod \"certified-operators-lmddq\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.586941 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-catalog-content\") pod \"certified-operators-lmddq\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.586974 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-utilities\") pod \"certified-operators-lmddq\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.587496 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-utilities\") pod \"certified-operators-lmddq\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 
01:30:11.587643 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-catalog-content\") pod \"certified-operators-lmddq\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.611083 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwljc\" (UniqueName: \"kubernetes.io/projected/40de8bfb-f2cc-4680-a75c-b8100c40bacd-kube-api-access-fwljc\") pod \"certified-operators-lmddq\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:11 crc kubenswrapper[4805]: I0217 01:30:11.761212 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:12 crc kubenswrapper[4805]: I0217 01:30:12.330518 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmddq"] Feb 17 01:30:12 crc kubenswrapper[4805]: I0217 01:30:12.744647 4805 generic.go:334] "Generic (PLEG): container finished" podID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" containerID="cb82421d42fff536bcfc244617c819b8d979df0b888338feaf92b19091bd5972" exitCode=0 Feb 17 01:30:12 crc kubenswrapper[4805]: I0217 01:30:12.744753 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmddq" event={"ID":"40de8bfb-f2cc-4680-a75c-b8100c40bacd","Type":"ContainerDied","Data":"cb82421d42fff536bcfc244617c819b8d979df0b888338feaf92b19091bd5972"} Feb 17 01:30:12 crc kubenswrapper[4805]: I0217 01:30:12.745308 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmddq" event={"ID":"40de8bfb-f2cc-4680-a75c-b8100c40bacd","Type":"ContainerStarted","Data":"c07e04a0e8ca515fd9c97788069bf4ebbcdf4d3b59b7b2039a392f4f0ba67aef"} Feb 17 01:30:13 crc kubenswrapper[4805]: I0217 01:30:13.761338 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmddq" event={"ID":"40de8bfb-f2cc-4680-a75c-b8100c40bacd","Type":"ContainerStarted","Data":"4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a"} Feb 17 01:30:15 crc kubenswrapper[4805]: E0217 01:30:15.787105 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:30:15 crc kubenswrapper[4805]: I0217 01:30:15.789262 4805 generic.go:334] "Generic (PLEG): container finished" podID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" containerID="4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a" exitCode=0 Feb 17 01:30:15 crc kubenswrapper[4805]: I0217 01:30:15.789307 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmddq" event={"ID":"40de8bfb-f2cc-4680-a75c-b8100c40bacd","Type":"ContainerDied","Data":"4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a"} Feb 17 01:30:16 crc kubenswrapper[4805]: I0217 01:30:16.808809 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmddq" 
event={"ID":"40de8bfb-f2cc-4680-a75c-b8100c40bacd","Type":"ContainerStarted","Data":"bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905"} Feb 17 01:30:16 crc kubenswrapper[4805]: I0217 01:30:16.855797 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lmddq" podStartSLOduration=2.410711576 podStartE2EDuration="5.855727059s" podCreationTimestamp="2026-02-17 01:30:11 +0000 UTC" firstStartedPulling="2026-02-17 01:30:12.746901776 +0000 UTC m=+4038.762711204" lastFinishedPulling="2026-02-17 01:30:16.191917249 +0000 UTC m=+4042.207726687" observedRunningTime="2026-02-17 01:30:16.835991824 +0000 UTC m=+4042.851801262" watchObservedRunningTime="2026-02-17 01:30:16.855727059 +0000 UTC m=+4042.871536497" Feb 17 01:30:20 crc kubenswrapper[4805]: I0217 01:30:20.245539 4805 scope.go:117] "RemoveContainer" containerID="afa9182a1a2d1b025fcc3ba0d28dfa7a791971efc3f8f09d41fe3288741303bc" Feb 17 01:30:21 crc kubenswrapper[4805]: I0217 01:30:21.762423 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:21 crc kubenswrapper[4805]: I0217 01:30:21.762878 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:21 crc kubenswrapper[4805]: E0217 01:30:21.786315 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:30:21 crc kubenswrapper[4805]: I0217 01:30:21.821650 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:21 crc kubenswrapper[4805]: I0217 01:30:21.927882 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:22 crc kubenswrapper[4805]: I0217 01:30:22.067136 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmddq"] Feb 17 01:30:23 crc kubenswrapper[4805]: I0217 01:30:23.077674 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:30:23 crc kubenswrapper[4805]: I0217 01:30:23.078151 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:30:23 crc kubenswrapper[4805]: I0217 01:30:23.900222 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lmddq" podUID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" containerName="registry-server" containerID="cri-o://bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905" gracePeriod=2 Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.585891 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.620456 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwljc\" (UniqueName: \"kubernetes.io/projected/40de8bfb-f2cc-4680-a75c-b8100c40bacd-kube-api-access-fwljc\") pod \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.620570 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-catalog-content\") pod \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.620827 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-utilities\") pod \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\" (UID: \"40de8bfb-f2cc-4680-a75c-b8100c40bacd\") " Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.622293 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-utilities" (OuterVolumeSpecName: "utilities") pod "40de8bfb-f2cc-4680-a75c-b8100c40bacd" (UID: "40de8bfb-f2cc-4680-a75c-b8100c40bacd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.634375 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40de8bfb-f2cc-4680-a75c-b8100c40bacd-kube-api-access-fwljc" (OuterVolumeSpecName: "kube-api-access-fwljc") pod "40de8bfb-f2cc-4680-a75c-b8100c40bacd" (UID: "40de8bfb-f2cc-4680-a75c-b8100c40bacd"). InnerVolumeSpecName "kube-api-access-fwljc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.673603 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40de8bfb-f2cc-4680-a75c-b8100c40bacd" (UID: "40de8bfb-f2cc-4680-a75c-b8100c40bacd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.724132 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.724183 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwljc\" (UniqueName: \"kubernetes.io/projected/40de8bfb-f2cc-4680-a75c-b8100c40bacd-kube-api-access-fwljc\") on node \"crc\" DevicePath \"\"" Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.724200 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40de8bfb-f2cc-4680-a75c-b8100c40bacd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.944381 4805 generic.go:334] "Generic (PLEG): container finished" podID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" containerID="bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905" exitCode=0 Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.944733 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmddq" event={"ID":"40de8bfb-f2cc-4680-a75c-b8100c40bacd","Type":"ContainerDied","Data":"bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905"} Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.944767 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmddq" event={"ID":"40de8bfb-f2cc-4680-a75c-b8100c40bacd","Type":"ContainerDied","Data":"c07e04a0e8ca515fd9c97788069bf4ebbcdf4d3b59b7b2039a392f4f0ba67aef"} Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.944788 4805 scope.go:117] "RemoveContainer" containerID="bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905" Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.944959 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmddq" Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.989180 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmddq"] Feb 17 01:30:24 crc kubenswrapper[4805]: I0217 01:30:24.994723 4805 scope.go:117] "RemoveContainer" containerID="4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a" Feb 17 01:30:25 crc kubenswrapper[4805]: I0217 01:30:25.016548 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lmddq"] Feb 17 01:30:25 crc kubenswrapper[4805]: I0217 01:30:25.032496 4805 scope.go:117] "RemoveContainer" containerID="cb82421d42fff536bcfc244617c819b8d979df0b888338feaf92b19091bd5972" Feb 17 01:30:25 crc kubenswrapper[4805]: I0217 01:30:25.107673 4805 scope.go:117] "RemoveContainer" containerID="bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905" Feb 17 01:30:25 crc kubenswrapper[4805]: E0217 01:30:25.108502 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905\": container with ID starting with bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905 not found: ID does not exist" containerID="bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905" Feb 17 01:30:25 crc kubenswrapper[4805]: I0217 01:30:25.108555 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905"} err="failed to get container status \"bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905\": rpc error: code = NotFound desc = could not find container \"bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905\": container with ID starting with bcc4ea447693b495500117e7ce3167fd418e516f57d0ebfe1e220b9daf2a5905 not found: ID does not exist" Feb 17 01:30:25 crc kubenswrapper[4805]: I0217 01:30:25.108594 4805 scope.go:117] "RemoveContainer" containerID="4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a" Feb 17 01:30:25 crc kubenswrapper[4805]: E0217 01:30:25.109077 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a\": container with ID starting with 4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a not found: ID does not exist" containerID="4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a" Feb 17 01:30:25 crc kubenswrapper[4805]: I0217 01:30:25.109125 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a"} err="failed to get container status \"4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a\": rpc error: code = NotFound desc = could not find container \"4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a\": container with ID starting with 4d184b2cee3ac26d49ef7f9b3965e7cebb820a6da9c4a7e638192922003e9b6a not found: ID does not exist" Feb 17 01:30:25 crc kubenswrapper[4805]: I0217 01:30:25.109156 4805 scope.go:117] "RemoveContainer" containerID="cb82421d42fff536bcfc244617c819b8d979df0b888338feaf92b19091bd5972" Feb 17 01:30:25 crc kubenswrapper[4805]: E0217 01:30:25.109498 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cb82421d42fff536bcfc244617c819b8d979df0b888338feaf92b19091bd5972\": container with ID starting with cb82421d42fff536bcfc244617c819b8d979df0b888338feaf92b19091bd5972 not found: ID does not exist" containerID="cb82421d42fff536bcfc244617c819b8d979df0b888338feaf92b19091bd5972" Feb 17 01:30:25 crc kubenswrapper[4805]: I0217 01:30:25.109536 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb82421d42fff536bcfc244617c819b8d979df0b888338feaf92b19091bd5972"} err="failed to get container status \"cb82421d42fff536bcfc244617c819b8d979df0b888338feaf92b19091bd5972\": rpc error: code = NotFound desc = could not find container \"cb82421d42fff536bcfc244617c819b8d979df0b888338feaf92b19091bd5972\": container with ID starting with cb82421d42fff536bcfc244617c819b8d979df0b888338feaf92b19091bd5972 not found: ID does not exist" Feb 17 01:30:26 crc kubenswrapper[4805]: I0217 01:30:26.797320 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" path="/var/lib/kubelet/pods/40de8bfb-f2cc-4680-a75c-b8100c40bacd/volumes" Feb 17 01:30:27 crc kubenswrapper[4805]: E0217 01:30:27.786835 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:30:32 crc kubenswrapper[4805]: E0217 01:30:32.788562 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:30:38 crc kubenswrapper[4805]: E0217 01:30:38.787936 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:30:44 crc kubenswrapper[4805]: E0217 01:30:44.802648 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:30:51 crc kubenswrapper[4805]: E0217 01:30:51.787590 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:30:53 crc kubenswrapper[4805]: I0217 01:30:53.077878 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 17 01:30:53 crc kubenswrapper[4805]: I0217 01:30:53.077967 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:30:53 crc kubenswrapper[4805]: I0217 01:30:53.078028 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 01:30:53 crc kubenswrapper[4805]: I0217 01:30:53.079130 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e70b524fd8a026bbaa383cfef94d5abcac048725251bed7339bb44bcbe80de3b"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 01:30:53 crc kubenswrapper[4805]: I0217 01:30:53.079202 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://e70b524fd8a026bbaa383cfef94d5abcac048725251bed7339bb44bcbe80de3b" gracePeriod=600 Feb 17 01:30:53 crc kubenswrapper[4805]: I0217 01:30:53.316305 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="e70b524fd8a026bbaa383cfef94d5abcac048725251bed7339bb44bcbe80de3b" exitCode=0 Feb 17 01:30:53 crc kubenswrapper[4805]: I0217 01:30:53.316490 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"e70b524fd8a026bbaa383cfef94d5abcac048725251bed7339bb44bcbe80de3b"} Feb 17 01:30:53 crc kubenswrapper[4805]: I0217 01:30:53.316620 4805 scope.go:117] "RemoveContainer" containerID="9c70ff220fbb4d7b1a518daf8ecd55474ee146fff0d368387d19a8c50108f3e7" Feb 17 01:30:54 crc kubenswrapper[4805]: I0217 01:30:54.334125 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e"} Feb 17 01:30:56 crc kubenswrapper[4805]: E0217 01:30:56.787383 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:31:04 crc kubenswrapper[4805]: E0217 01:31:04.804579 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.377175 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zd94h"] Feb 17 01:31:06 crc 
kubenswrapper[4805]: E0217 01:31:06.378018 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" containerName="extract-utilities" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.378093 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" containerName="extract-utilities" Feb 17 01:31:06 crc kubenswrapper[4805]: E0217 01:31:06.378125 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" containerName="registry-server" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.378136 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" containerName="registry-server" Feb 17 01:31:06 crc kubenswrapper[4805]: E0217 01:31:06.378155 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" containerName="extract-content" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.378163 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" containerName="extract-content" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.378466 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="40de8bfb-f2cc-4680-a75c-b8100c40bacd" containerName="registry-server" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.380610 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.397780 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zd94h"] Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.579136 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-catalog-content\") pod \"redhat-marketplace-zd94h\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.579417 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-utilities\") pod \"redhat-marketplace-zd94h\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.579797 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4b7q\" (UniqueName: \"kubernetes.io/projected/bc9c2759-bb63-417a-8970-2fa1d3acf675-kube-api-access-h4b7q\") pod \"redhat-marketplace-zd94h\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.681984 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4b7q\" (UniqueName: \"kubernetes.io/projected/bc9c2759-bb63-417a-8970-2fa1d3acf675-kube-api-access-h4b7q\") pod \"redhat-marketplace-zd94h\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.682123 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-catalog-content\") pod \"redhat-marketplace-zd94h\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.682281 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-utilities\") pod \"redhat-marketplace-zd94h\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.682615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-catalog-content\") pod \"redhat-marketplace-zd94h\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.682861 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-utilities\") pod \"redhat-marketplace-zd94h\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.704080 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4b7q\" (UniqueName: \"kubernetes.io/projected/bc9c2759-bb63-417a-8970-2fa1d3acf675-kube-api-access-h4b7q\") pod \"redhat-marketplace-zd94h\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:06 crc kubenswrapper[4805]: I0217 01:31:06.712854 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:07 crc kubenswrapper[4805]: I0217 01:31:07.192194 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zd94h"] Feb 17 01:31:07 crc kubenswrapper[4805]: W0217 01:31:07.202186 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc9c2759_bb63_417a_8970_2fa1d3acf675.slice/crio-c3de4acd71fe71085f8396edf98a007936c8f23dc1495be704e99b89edf46ab7 WatchSource:0}: Error finding container c3de4acd71fe71085f8396edf98a007936c8f23dc1495be704e99b89edf46ab7: Status 404 returned error can't find the container with id c3de4acd71fe71085f8396edf98a007936c8f23dc1495be704e99b89edf46ab7 Feb 17 01:31:07 crc kubenswrapper[4805]: I0217 01:31:07.502715 4805 generic.go:334] "Generic (PLEG): container finished" podID="bc9c2759-bb63-417a-8970-2fa1d3acf675" containerID="3f4eb788f0f019c6c9549c101a3ac37099cca62caf1d6dcea5301427a148ac17" exitCode=0 Feb 17 01:31:07 crc kubenswrapper[4805]: I0217 01:31:07.502840 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd94h" event={"ID":"bc9c2759-bb63-417a-8970-2fa1d3acf675","Type":"ContainerDied","Data":"3f4eb788f0f019c6c9549c101a3ac37099cca62caf1d6dcea5301427a148ac17"} Feb 17 01:31:07 crc kubenswrapper[4805]: I0217 01:31:07.503036 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd94h" event={"ID":"bc9c2759-bb63-417a-8970-2fa1d3acf675","Type":"ContainerStarted","Data":"c3de4acd71fe71085f8396edf98a007936c8f23dc1495be704e99b89edf46ab7"} Feb 17 01:31:08 crc kubenswrapper[4805]: I0217 01:31:08.516949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd94h" event={"ID":"bc9c2759-bb63-417a-8970-2fa1d3acf675","Type":"ContainerStarted","Data":"3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf"} Feb 17 01:31:09 crc kubenswrapper[4805]: I0217 01:31:09.533619 4805 generic.go:334] "Generic (PLEG): container finished" podID="bc9c2759-bb63-417a-8970-2fa1d3acf675" containerID="3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf" exitCode=0 Feb 17 01:31:09 crc kubenswrapper[4805]: I0217 01:31:09.533805 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd94h" event={"ID":"bc9c2759-bb63-417a-8970-2fa1d3acf675","Type":"ContainerDied","Data":"3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf"} Feb 17 01:31:09 crc kubenswrapper[4805]: E0217 01:31:09.787396 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:31:10 crc kubenswrapper[4805]: I0217 01:31:10.551989 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd94h" event={"ID":"bc9c2759-bb63-417a-8970-2fa1d3acf675","Type":"ContainerStarted","Data":"2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6"} Feb 17 01:31:10 crc kubenswrapper[4805]: I0217 01:31:10.586590 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zd94h" podStartSLOduration=2.147462698 
podStartE2EDuration="4.586566188s" podCreationTimestamp="2026-02-17 01:31:06 +0000 UTC" firstStartedPulling="2026-02-17 01:31:07.504532363 +0000 UTC m=+4093.520341761" lastFinishedPulling="2026-02-17 01:31:09.943635833 +0000 UTC m=+4095.959445251" observedRunningTime="2026-02-17 01:31:10.581697082 +0000 UTC m=+4096.597506480" watchObservedRunningTime="2026-02-17 01:31:10.586566188 +0000 UTC m=+4096.602375606" Feb 17 01:31:15 crc kubenswrapper[4805]: E0217 01:31:15.791484 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:31:16 crc kubenswrapper[4805]: I0217 01:31:16.713670 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:16 crc kubenswrapper[4805]: I0217 01:31:16.713746 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:16 crc kubenswrapper[4805]: I0217 01:31:16.810141 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:17 crc kubenswrapper[4805]: I0217 01:31:17.699880 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:17 crc kubenswrapper[4805]: I0217 01:31:17.776159 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zd94h"] Feb 17 01:31:19 crc kubenswrapper[4805]: I0217 01:31:19.668139 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zd94h" podUID="bc9c2759-bb63-417a-8970-2fa1d3acf675" containerName="registry-server" containerID="cri-o://2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6" gracePeriod=2 Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.248497 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.400688 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-utilities\") pod \"bc9c2759-bb63-417a-8970-2fa1d3acf675\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.400784 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-catalog-content\") pod \"bc9c2759-bb63-417a-8970-2fa1d3acf675\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.400816 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4b7q\" (UniqueName: \"kubernetes.io/projected/bc9c2759-bb63-417a-8970-2fa1d3acf675-kube-api-access-h4b7q\") pod \"bc9c2759-bb63-417a-8970-2fa1d3acf675\" (UID: \"bc9c2759-bb63-417a-8970-2fa1d3acf675\") " Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.401586 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-utilities" (OuterVolumeSpecName: "utilities") pod "bc9c2759-bb63-417a-8970-2fa1d3acf675" (UID: "bc9c2759-bb63-417a-8970-2fa1d3acf675"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.406181 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9c2759-bb63-417a-8970-2fa1d3acf675-kube-api-access-h4b7q" (OuterVolumeSpecName: "kube-api-access-h4b7q") pod "bc9c2759-bb63-417a-8970-2fa1d3acf675" (UID: "bc9c2759-bb63-417a-8970-2fa1d3acf675"). InnerVolumeSpecName "kube-api-access-h4b7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.422318 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc9c2759-bb63-417a-8970-2fa1d3acf675" (UID: "bc9c2759-bb63-417a-8970-2fa1d3acf675"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.503144 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.503179 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc9c2759-bb63-417a-8970-2fa1d3acf675-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.503189 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4b7q\" (UniqueName: \"kubernetes.io/projected/bc9c2759-bb63-417a-8970-2fa1d3acf675-kube-api-access-h4b7q\") on node \"crc\" DevicePath \"\"" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.681839 4805 generic.go:334] "Generic (PLEG): container finished" podID="bc9c2759-bb63-417a-8970-2fa1d3acf675" containerID="2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6" exitCode=0 Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.681900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd94h" event={"ID":"bc9c2759-bb63-417a-8970-2fa1d3acf675","Type":"ContainerDied","Data":"2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6"} Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.681989 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zd94h" event={"ID":"bc9c2759-bb63-417a-8970-2fa1d3acf675","Type":"ContainerDied","Data":"c3de4acd71fe71085f8396edf98a007936c8f23dc1495be704e99b89edf46ab7"} Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.682034 4805 scope.go:117] "RemoveContainer" containerID="2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.682852 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zd94h" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.723273 4805 scope.go:117] "RemoveContainer" containerID="3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.735764 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zd94h"] Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.756980 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zd94h"] Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.761821 4805 scope.go:117] "RemoveContainer" containerID="3f4eb788f0f019c6c9549c101a3ac37099cca62caf1d6dcea5301427a148ac17" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.808869 4805 scope.go:117] "RemoveContainer" containerID="2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6" Feb 17 01:31:20 crc kubenswrapper[4805]: E0217 01:31:20.815601 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6\": container with ID starting with 2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6 not found: ID does not exist" containerID="2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.815691 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6"} err="failed to get container status \"2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6\": rpc error: code = NotFound desc = could not find container \"2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6\": container with ID starting with 2cac4de1b01c15581bcde50c4c3a24c1b81318b9d1242199e3c22168c24c32d6 not found: ID does not exist" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.815729 4805 scope.go:117] "RemoveContainer" containerID="3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf" Feb 17 01:31:20 crc kubenswrapper[4805]: E0217 01:31:20.816411 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf\": container with ID starting with 3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf not found: ID does not exist" containerID="3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.816490 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf"} err="failed to get container status \"3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf\": rpc error: code = NotFound desc = could not find container \"3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf\": container with ID starting with 3ec93b9a63f29a2534715821c123f5af12a44f2b78e691257394b960264e0faf not found: ID does not exist" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.816544 4805 scope.go:117] "RemoveContainer" containerID="3f4eb788f0f019c6c9549c101a3ac37099cca62caf1d6dcea5301427a148ac17" Feb 17 01:31:20 crc kubenswrapper[4805]: E0217 01:31:20.817092 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3f4eb788f0f019c6c9549c101a3ac37099cca62caf1d6dcea5301427a148ac17\": container with ID starting with 3f4eb788f0f019c6c9549c101a3ac37099cca62caf1d6dcea5301427a148ac17 not found: ID does not exist" containerID="3f4eb788f0f019c6c9549c101a3ac37099cca62caf1d6dcea5301427a148ac17" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.817147 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4eb788f0f019c6c9549c101a3ac37099cca62caf1d6dcea5301427a148ac17"} err="failed to get container status \"3f4eb788f0f019c6c9549c101a3ac37099cca62caf1d6dcea5301427a148ac17\": rpc error: code = NotFound desc = could not find container \"3f4eb788f0f019c6c9549c101a3ac37099cca62caf1d6dcea5301427a148ac17\": container with ID starting with 3f4eb788f0f019c6c9549c101a3ac37099cca62caf1d6dcea5301427a148ac17 not found: ID does not exist" Feb 17 01:31:20 crc kubenswrapper[4805]: I0217 01:31:20.819988 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9c2759-bb63-417a-8970-2fa1d3acf675" path="/var/lib/kubelet/pods/bc9c2759-bb63-417a-8970-2fa1d3acf675/volumes" Feb 17 01:31:24 crc kubenswrapper[4805]: E0217 01:31:24.792483 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:31:28 crc kubenswrapper[4805]: E0217 01:31:28.787860 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:31:35 crc kubenswrapper[4805]: I0217 01:31:35.935191 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nb5kq"] Feb 17 01:31:35 crc kubenswrapper[4805]: E0217 01:31:35.936530 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9c2759-bb63-417a-8970-2fa1d3acf675" containerName="extract-utilities" Feb 17 01:31:35 crc kubenswrapper[4805]: I0217 01:31:35.936555 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9c2759-bb63-417a-8970-2fa1d3acf675" containerName="extract-utilities" Feb 17 01:31:35 crc kubenswrapper[4805]: E0217 01:31:35.936573 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9c2759-bb63-417a-8970-2fa1d3acf675" containerName="extract-content" Feb 17 01:31:35 crc kubenswrapper[4805]: I0217 01:31:35.936584 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9c2759-bb63-417a-8970-2fa1d3acf675" containerName="extract-content" Feb 17 01:31:35 crc kubenswrapper[4805]: E0217 01:31:35.936614 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9c2759-bb63-417a-8970-2fa1d3acf675" containerName="registry-server" Feb 17 01:31:35 crc kubenswrapper[4805]: I0217 01:31:35.936623 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9c2759-bb63-417a-8970-2fa1d3acf675" containerName="registry-server" Feb 17 01:31:35 crc kubenswrapper[4805]: I0217 01:31:35.936933 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9c2759-bb63-417a-8970-2fa1d3acf675" 
containerName="registry-server" Feb 17 01:31:35 crc kubenswrapper[4805]: I0217 01:31:35.940856 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:35 crc kubenswrapper[4805]: I0217 01:31:35.953340 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nb5kq"] Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.067137 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2nq7\" (UniqueName: \"kubernetes.io/projected/2b2dee8c-2c6f-43db-a590-e90b48e9af67-kube-api-access-j2nq7\") pod \"community-operators-nb5kq\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.067189 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-catalog-content\") pod \"community-operators-nb5kq\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.067330 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-utilities\") pod \"community-operators-nb5kq\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.168876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2nq7\" (UniqueName: \"kubernetes.io/projected/2b2dee8c-2c6f-43db-a590-e90b48e9af67-kube-api-access-j2nq7\") pod \"community-operators-nb5kq\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.168923 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-catalog-content\") pod \"community-operators-nb5kq\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.169010 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-utilities\") pod \"community-operators-nb5kq\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.169527 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-catalog-content\") pod \"community-operators-nb5kq\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.169554 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-utilities\") pod \"community-operators-nb5kq\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " 
pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.189995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2nq7\" (UniqueName: \"kubernetes.io/projected/2b2dee8c-2c6f-43db-a590-e90b48e9af67-kube-api-access-j2nq7\") pod \"community-operators-nb5kq\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.266085 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:36 crc kubenswrapper[4805]: W0217 01:31:36.820276 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b2dee8c_2c6f_43db_a590_e90b48e9af67.slice/crio-3a53b48a34f1644e0f2b5950ead6926875f9044abd56f103780f2c5d1811f9d0 WatchSource:0}: Error finding container 3a53b48a34f1644e0f2b5950ead6926875f9044abd56f103780f2c5d1811f9d0: Status 404 returned error can't find the container with id 3a53b48a34f1644e0f2b5950ead6926875f9044abd56f103780f2c5d1811f9d0 Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.831973 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nb5kq"] Feb 17 01:31:36 crc kubenswrapper[4805]: I0217 01:31:36.897366 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb5kq" event={"ID":"2b2dee8c-2c6f-43db-a590-e90b48e9af67","Type":"ContainerStarted","Data":"3a53b48a34f1644e0f2b5950ead6926875f9044abd56f103780f2c5d1811f9d0"} Feb 17 01:31:37 crc kubenswrapper[4805]: I0217 01:31:37.916092 4805 generic.go:334] "Generic (PLEG): container finished" podID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" containerID="0202ad4a957b7d1e2a7e439fa916c5f84eef3281558eea894071c13dcb4b320d" exitCode=0 Feb 17 01:31:37 crc kubenswrapper[4805]: I0217 01:31:37.916574 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb5kq" event={"ID":"2b2dee8c-2c6f-43db-a590-e90b48e9af67","Type":"ContainerDied","Data":"0202ad4a957b7d1e2a7e439fa916c5f84eef3281558eea894071c13dcb4b320d"} Feb 17 01:31:38 crc kubenswrapper[4805]: E0217 01:31:38.786254 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:31:38 crc kubenswrapper[4805]: I0217 01:31:38.932472 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb5kq" event={"ID":"2b2dee8c-2c6f-43db-a590-e90b48e9af67","Type":"ContainerStarted","Data":"36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f"} Feb 17 01:31:39 crc kubenswrapper[4805]: E0217 01:31:39.787678 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:31:40 crc kubenswrapper[4805]: I0217 01:31:40.959806 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" containerID="36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f" exitCode=0 Feb 17 01:31:40 crc kubenswrapper[4805]: I0217 01:31:40.959889 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb5kq" event={"ID":"2b2dee8c-2c6f-43db-a590-e90b48e9af67","Type":"ContainerDied","Data":"36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f"} Feb 17 01:31:42 crc kubenswrapper[4805]: I0217 01:31:42.985389 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb5kq" event={"ID":"2b2dee8c-2c6f-43db-a590-e90b48e9af67","Type":"ContainerStarted","Data":"a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15"} Feb 17 01:31:43 crc kubenswrapper[4805]: I0217 01:31:43.012557 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nb5kq" podStartSLOduration=4.589382076 podStartE2EDuration="8.012500046s" podCreationTimestamp="2026-02-17 01:31:35 +0000 UTC" firstStartedPulling="2026-02-17 01:31:37.919553636 +0000 UTC m=+4123.935363074" lastFinishedPulling="2026-02-17 01:31:41.342671606 +0000 UTC m=+4127.358481044" observedRunningTime="2026-02-17 01:31:43.00690175 +0000 UTC m=+4129.022711158" watchObservedRunningTime="2026-02-17 01:31:43.012500046 +0000 UTC m=+4129.028309464" Feb 17 01:31:46 crc kubenswrapper[4805]: I0217 01:31:46.267277 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:46 crc kubenswrapper[4805]: I0217 01:31:46.267951 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:46 crc kubenswrapper[4805]: I0217 01:31:46.354287 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:47 crc kubenswrapper[4805]: I0217 01:31:47.134174 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:47 crc kubenswrapper[4805]: I0217 01:31:47.202503 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nb5kq"] Feb 17 01:31:49 crc kubenswrapper[4805]: I0217 01:31:49.066853 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nb5kq" podUID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" containerName="registry-server" containerID="cri-o://a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15" gracePeriod=2 Feb 17 01:31:49 crc kubenswrapper[4805]: E0217 01:31:49.390698 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b2dee8c_2c6f_43db_a590_e90b48e9af67.slice/crio-a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b2dee8c_2c6f_43db_a590_e90b48e9af67.slice/crio-conmon-a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15.scope\": RecentStats: unable to find data in memory cache]" Feb 17 01:31:49 crc kubenswrapper[4805]: I0217 01:31:49.642491 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:49 crc kubenswrapper[4805]: I0217 01:31:49.687720 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-catalog-content\") pod \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " Feb 17 01:31:49 crc kubenswrapper[4805]: I0217 01:31:49.687853 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-utilities\") pod \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " Feb 17 01:31:49 crc kubenswrapper[4805]: I0217 01:31:49.687886 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2nq7\" (UniqueName: \"kubernetes.io/projected/2b2dee8c-2c6f-43db-a590-e90b48e9af67-kube-api-access-j2nq7\") pod \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\" (UID: \"2b2dee8c-2c6f-43db-a590-e90b48e9af67\") " Feb 17 01:31:49 crc kubenswrapper[4805]: I0217 01:31:49.691017 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-utilities" (OuterVolumeSpecName: "utilities") pod "2b2dee8c-2c6f-43db-a590-e90b48e9af67" (UID: "2b2dee8c-2c6f-43db-a590-e90b48e9af67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:31:49 crc kubenswrapper[4805]: I0217 01:31:49.710746 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2dee8c-2c6f-43db-a590-e90b48e9af67-kube-api-access-j2nq7" (OuterVolumeSpecName: "kube-api-access-j2nq7") pod "2b2dee8c-2c6f-43db-a590-e90b48e9af67" (UID: "2b2dee8c-2c6f-43db-a590-e90b48e9af67"). InnerVolumeSpecName "kube-api-access-j2nq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:31:49 crc kubenswrapper[4805]: I0217 01:31:49.754904 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b2dee8c-2c6f-43db-a590-e90b48e9af67" (UID: "2b2dee8c-2c6f-43db-a590-e90b48e9af67"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:31:49 crc kubenswrapper[4805]: I0217 01:31:49.790836 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:31:49 crc kubenswrapper[4805]: I0217 01:31:49.790879 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b2dee8c-2c6f-43db-a590-e90b48e9af67-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:31:49 crc kubenswrapper[4805]: I0217 01:31:49.790892 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2nq7\" (UniqueName: \"kubernetes.io/projected/2b2dee8c-2c6f-43db-a590-e90b48e9af67-kube-api-access-j2nq7\") on node \"crc\" DevicePath \"\"" Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.083977 4805 generic.go:334] "Generic (PLEG): container finished" podID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" containerID="a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15" exitCode=0 Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.084040 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb5kq" event={"ID":"2b2dee8c-2c6f-43db-a590-e90b48e9af67","Type":"ContainerDied","Data":"a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15"} Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.084065 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nb5kq" Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.084085 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nb5kq" event={"ID":"2b2dee8c-2c6f-43db-a590-e90b48e9af67","Type":"ContainerDied","Data":"3a53b48a34f1644e0f2b5950ead6926875f9044abd56f103780f2c5d1811f9d0"} Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.084113 4805 scope.go:117] "RemoveContainer" containerID="a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15" Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.118566 4805 scope.go:117] "RemoveContainer" containerID="36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f" Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.156756 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nb5kq"] Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.164450 4805 scope.go:117] "RemoveContainer" containerID="0202ad4a957b7d1e2a7e439fa916c5f84eef3281558eea894071c13dcb4b320d" Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.170711 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nb5kq"] Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.212734 4805 scope.go:117] "RemoveContainer" containerID="a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15" Feb 17 01:31:50 crc kubenswrapper[4805]: E0217 01:31:50.213291 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15\": container with ID starting with a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15 not found: ID does not exist" containerID="a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15" Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.213362 
4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15"} err="failed to get container status \"a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15\": rpc error: code = NotFound desc = could not find container \"a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15\": container with ID starting with a4a4ae7cb31cb0bd1dbe3243a8d5a0b4d2fb0a2c0fa7a0ff93c47f7e43fe9d15 not found: ID does not exist" Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.213398 4805 scope.go:117] "RemoveContainer" containerID="36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f" Feb 17 01:31:50 crc kubenswrapper[4805]: E0217 01:31:50.213930 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f\": container with ID starting with 36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f not found: ID does not exist" containerID="36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f" Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.214003 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f"} err="failed to get container status \"36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f\": rpc error: code = NotFound desc = could not find container \"36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f\": container with ID starting with 36b512aeb2d36910db381ef02d679780db326ed033a8a5de41a1b090bc22646f not found: ID does not exist" Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.214045 4805 scope.go:117] "RemoveContainer" containerID="0202ad4a957b7d1e2a7e439fa916c5f84eef3281558eea894071c13dcb4b320d" Feb 17 01:31:50 crc kubenswrapper[4805]: E0217 01:31:50.214568 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0202ad4a957b7d1e2a7e439fa916c5f84eef3281558eea894071c13dcb4b320d\": container with ID starting with 0202ad4a957b7d1e2a7e439fa916c5f84eef3281558eea894071c13dcb4b320d not found: ID does not exist" containerID="0202ad4a957b7d1e2a7e439fa916c5f84eef3281558eea894071c13dcb4b320d" Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.214674 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0202ad4a957b7d1e2a7e439fa916c5f84eef3281558eea894071c13dcb4b320d"} err="failed to get container status \"0202ad4a957b7d1e2a7e439fa916c5f84eef3281558eea894071c13dcb4b320d\": rpc error: code = NotFound desc = could not find container \"0202ad4a957b7d1e2a7e439fa916c5f84eef3281558eea894071c13dcb4b320d\": container with ID starting with 0202ad4a957b7d1e2a7e439fa916c5f84eef3281558eea894071c13dcb4b320d not found: ID does not exist" Feb 17 01:31:50 crc kubenswrapper[4805]: I0217 01:31:50.805041 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" path="/var/lib/kubelet/pods/2b2dee8c-2c6f-43db-a590-e90b48e9af67/volumes" Feb 17 01:31:52 crc kubenswrapper[4805]: E0217 01:31:52.790827 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:31:52 crc kubenswrapper[4805]: E0217 01:31:52.793621 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.309948 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-65svm"] Feb 17 01:32:02 crc kubenswrapper[4805]: E0217 01:32:02.314918 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" containerName="extract-utilities" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.314941 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" containerName="extract-utilities" Feb 17 01:32:02 crc kubenswrapper[4805]: E0217 01:32:02.314970 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" containerName="extract-content" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.314978 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" containerName="extract-content" Feb 17 01:32:02 crc kubenswrapper[4805]: E0217 01:32:02.314995 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" containerName="registry-server" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.315002 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" containerName="registry-server" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.315500 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2dee8c-2c6f-43db-a590-e90b48e9af67" containerName="registry-server" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.318090 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.350223 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65svm"] Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.431973 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-utilities\") pod \"redhat-operators-65svm\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.432019 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-catalog-content\") pod \"redhat-operators-65svm\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.432162 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg924\" (UniqueName: \"kubernetes.io/projected/08802fa9-c60f-49bc-a71b-64491dfad8d3-kube-api-access-rg924\") pod \"redhat-operators-65svm\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.534224 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-utilities\") pod \"redhat-operators-65svm\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.534299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-catalog-content\") pod \"redhat-operators-65svm\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.534494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg924\" (UniqueName: \"kubernetes.io/projected/08802fa9-c60f-49bc-a71b-64491dfad8d3-kube-api-access-rg924\") pod \"redhat-operators-65svm\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.535124 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-utilities\") pod \"redhat-operators-65svm\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.535623 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-catalog-content\") pod \"redhat-operators-65svm\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.558122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rg924\" (UniqueName: \"kubernetes.io/projected/08802fa9-c60f-49bc-a71b-64491dfad8d3-kube-api-access-rg924\") pod \"redhat-operators-65svm\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:02 crc kubenswrapper[4805]: I0217 01:32:02.667658 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:03 crc kubenswrapper[4805]: I0217 01:32:03.147045 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65svm"] Feb 17 01:32:03 crc kubenswrapper[4805]: I0217 01:32:03.255741 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65svm" event={"ID":"08802fa9-c60f-49bc-a71b-64491dfad8d3","Type":"ContainerStarted","Data":"c2392f0bb5b02b450875f92143ccb6c33fad7c096f36d4276be054c3ba864f3c"} Feb 17 01:32:04 crc kubenswrapper[4805]: I0217 01:32:04.272194 4805 generic.go:334] "Generic (PLEG): container finished" podID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerID="587cf370d547720f9ac6217ba8533106f17e78316f5d8c4fa60ac0b6f85489d1" exitCode=0 Feb 17 01:32:04 crc kubenswrapper[4805]: I0217 01:32:04.272284 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65svm" event={"ID":"08802fa9-c60f-49bc-a71b-64491dfad8d3","Type":"ContainerDied","Data":"587cf370d547720f9ac6217ba8533106f17e78316f5d8c4fa60ac0b6f85489d1"} Feb 17 01:32:05 crc kubenswrapper[4805]: I0217 01:32:05.287627 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65svm" event={"ID":"08802fa9-c60f-49bc-a71b-64491dfad8d3","Type":"ContainerStarted","Data":"247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82"} Feb 17 01:32:05 crc kubenswrapper[4805]: E0217 01:32:05.786864 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:32:07 crc kubenswrapper[4805]: E0217 01:32:07.786600 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:32:08 crc kubenswrapper[4805]: I0217 01:32:08.331512 4805 generic.go:334] "Generic (PLEG): container finished" podID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerID="247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82" exitCode=0 Feb 17 01:32:08 crc kubenswrapper[4805]: I0217 01:32:08.331617 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65svm" event={"ID":"08802fa9-c60f-49bc-a71b-64491dfad8d3","Type":"ContainerDied","Data":"247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82"} Feb 17 01:32:09 crc kubenswrapper[4805]: I0217 01:32:09.346866 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65svm" event={"ID":"08802fa9-c60f-49bc-a71b-64491dfad8d3","Type":"ContainerStarted","Data":"8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece"} Feb 17 01:32:09 crc 
kubenswrapper[4805]: I0217 01:32:09.381499 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-65svm" podStartSLOduration=2.858749369 podStartE2EDuration="7.381472083s" podCreationTimestamp="2026-02-17 01:32:02 +0000 UTC" firstStartedPulling="2026-02-17 01:32:04.27519611 +0000 UTC m=+4150.291005548" lastFinishedPulling="2026-02-17 01:32:08.797918824 +0000 UTC m=+4154.813728262" observedRunningTime="2026-02-17 01:32:09.374688814 +0000 UTC m=+4155.390498242" watchObservedRunningTime="2026-02-17 01:32:09.381472083 +0000 UTC m=+4155.397281521" Feb 17 01:32:12 crc kubenswrapper[4805]: I0217 01:32:12.668185 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:12 crc kubenswrapper[4805]: I0217 01:32:12.668819 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:13 crc kubenswrapper[4805]: I0217 01:32:13.938082 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-65svm" podUID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerName="registry-server" probeResult="failure" output=< Feb 17 01:32:13 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 01:32:13 crc kubenswrapper[4805]: > Feb 17 01:32:20 crc kubenswrapper[4805]: E0217 01:32:20.800251 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:32:20 crc kubenswrapper[4805]: E0217 01:32:20.800780 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:32:22 crc kubenswrapper[4805]: I0217 01:32:22.723813 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:22 crc kubenswrapper[4805]: I0217 01:32:22.804885 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:22 crc kubenswrapper[4805]: I0217 01:32:22.976434 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65svm"] Feb 17 01:32:24 crc kubenswrapper[4805]: I0217 01:32:24.734246 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-65svm" podUID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerName="registry-server" containerID="cri-o://8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece" gracePeriod=2 Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.242879 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.353445 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-catalog-content\") pod \"08802fa9-c60f-49bc-a71b-64491dfad8d3\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.353654 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-utilities\") pod \"08802fa9-c60f-49bc-a71b-64491dfad8d3\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.353675 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg924\" (UniqueName: \"kubernetes.io/projected/08802fa9-c60f-49bc-a71b-64491dfad8d3-kube-api-access-rg924\") pod \"08802fa9-c60f-49bc-a71b-64491dfad8d3\" (UID: \"08802fa9-c60f-49bc-a71b-64491dfad8d3\") " Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.354406 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-utilities" (OuterVolumeSpecName: "utilities") pod "08802fa9-c60f-49bc-a71b-64491dfad8d3" (UID: "08802fa9-c60f-49bc-a71b-64491dfad8d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.359054 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08802fa9-c60f-49bc-a71b-64491dfad8d3-kube-api-access-rg924" (OuterVolumeSpecName: "kube-api-access-rg924") pod "08802fa9-c60f-49bc-a71b-64491dfad8d3" (UID: "08802fa9-c60f-49bc-a71b-64491dfad8d3"). InnerVolumeSpecName "kube-api-access-rg924". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.456815 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.456860 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg924\" (UniqueName: \"kubernetes.io/projected/08802fa9-c60f-49bc-a71b-64491dfad8d3-kube-api-access-rg924\") on node \"crc\" DevicePath \"\"" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.485475 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08802fa9-c60f-49bc-a71b-64491dfad8d3" (UID: "08802fa9-c60f-49bc-a71b-64491dfad8d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.559549 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08802fa9-c60f-49bc-a71b-64491dfad8d3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.746825 4805 generic.go:334] "Generic (PLEG): container finished" podID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerID="8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece" exitCode=0 Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.746870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65svm" event={"ID":"08802fa9-c60f-49bc-a71b-64491dfad8d3","Type":"ContainerDied","Data":"8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece"} Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.746900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65svm" event={"ID":"08802fa9-c60f-49bc-a71b-64491dfad8d3","Type":"ContainerDied","Data":"c2392f0bb5b02b450875f92143ccb6c33fad7c096f36d4276be054c3ba864f3c"} Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.746917 4805 scope.go:117] "RemoveContainer" containerID="8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.748964 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65svm" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.791824 4805 scope.go:117] "RemoveContainer" containerID="247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.814812 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65svm"] Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.830543 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-65svm"] Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.838394 4805 scope.go:117] "RemoveContainer" containerID="587cf370d547720f9ac6217ba8533106f17e78316f5d8c4fa60ac0b6f85489d1" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.916957 4805 scope.go:117] "RemoveContainer" containerID="8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece" Feb 17 01:32:25 crc kubenswrapper[4805]: E0217 01:32:25.917708 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece\": container with ID starting with 8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece not found: ID does not exist" containerID="8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.917774 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece"} err="failed to get container status \"8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece\": rpc error: code = NotFound desc = could not find container \"8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece\": container with ID starting with 8290c06b70b0b487189d76ef54bb0c7f074dc7e2badea6f33f5f9332c93e0ece not found: ID does not exist" Feb 17 01:32:25 crc 
kubenswrapper[4805]: I0217 01:32:25.917838 4805 scope.go:117] "RemoveContainer" containerID="247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82" Feb 17 01:32:25 crc kubenswrapper[4805]: E0217 01:32:25.918916 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82\": container with ID starting with 247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82 not found: ID does not exist" containerID="247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.919132 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82"} err="failed to get container status \"247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82\": rpc error: code = NotFound desc = could not find container \"247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82\": container with ID starting with 247d5abc07c020eb7cae421d39a3a11c6da0a207ed55e7f89a475f39696ddc82 not found: ID does not exist" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.919297 4805 scope.go:117] "RemoveContainer" containerID="587cf370d547720f9ac6217ba8533106f17e78316f5d8c4fa60ac0b6f85489d1" Feb 17 01:32:25 crc kubenswrapper[4805]: E0217 01:32:25.919793 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"587cf370d547720f9ac6217ba8533106f17e78316f5d8c4fa60ac0b6f85489d1\": container with ID starting with 587cf370d547720f9ac6217ba8533106f17e78316f5d8c4fa60ac0b6f85489d1 not found: ID does not exist" containerID="587cf370d547720f9ac6217ba8533106f17e78316f5d8c4fa60ac0b6f85489d1" Feb 17 01:32:25 crc kubenswrapper[4805]: I0217 01:32:25.919876 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"587cf370d547720f9ac6217ba8533106f17e78316f5d8c4fa60ac0b6f85489d1"} err="failed to get container status \"587cf370d547720f9ac6217ba8533106f17e78316f5d8c4fa60ac0b6f85489d1\": rpc error: code = NotFound desc = could not find container \"587cf370d547720f9ac6217ba8533106f17e78316f5d8c4fa60ac0b6f85489d1\": container with ID starting with 587cf370d547720f9ac6217ba8533106f17e78316f5d8c4fa60ac0b6f85489d1 not found: ID does not exist" Feb 17 01:32:26 crc kubenswrapper[4805]: I0217 01:32:26.833626 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08802fa9-c60f-49bc-a71b-64491dfad8d3" path="/var/lib/kubelet/pods/08802fa9-c60f-49bc-a71b-64491dfad8d3/volumes" Feb 17 01:32:33 crc kubenswrapper[4805]: E0217 01:32:33.788081 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:32:33 crc kubenswrapper[4805]: E0217 01:32:33.788627 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:32:46 crc kubenswrapper[4805]: 
E0217 01:32:46.805364 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:32:48 crc kubenswrapper[4805]: E0217 01:32:48.788449 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:32:53 crc kubenswrapper[4805]: I0217 01:32:53.076750 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:32:53 crc kubenswrapper[4805]: I0217 01:32:53.077142 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:32:59 crc kubenswrapper[4805]: E0217 01:32:59.787548 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:33:01 crc kubenswrapper[4805]: E0217 01:33:01.788452 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:33:11 crc kubenswrapper[4805]: E0217 01:33:11.787153 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:33:15 crc kubenswrapper[4805]: E0217 01:33:15.787610 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:33:23 crc kubenswrapper[4805]: I0217 01:33:23.076891 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:33:23 crc kubenswrapper[4805]: I0217 01:33:23.077465 4805 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:33:23 crc kubenswrapper[4805]: E0217 01:33:23.787242 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:33:27 crc kubenswrapper[4805]: E0217 01:33:27.787994 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:33:35 crc kubenswrapper[4805]: E0217 01:33:35.788513 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:33:42 crc kubenswrapper[4805]: E0217 01:33:42.787670 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:33:47 crc kubenswrapper[4805]: E0217 01:33:47.787475 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:33:53 crc kubenswrapper[4805]: I0217 01:33:53.079141 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:33:53 crc kubenswrapper[4805]: I0217 01:33:53.079708 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:33:53 crc kubenswrapper[4805]: I0217 01:33:53.079774 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 01:33:53 crc kubenswrapper[4805]: I0217 01:33:53.080671 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 01:33:53 crc kubenswrapper[4805]: I0217 01:33:53.080761 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" gracePeriod=600 Feb 17 01:33:53 crc kubenswrapper[4805]: E0217 01:33:53.204178 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:33:53 crc kubenswrapper[4805]: I0217 01:33:53.822010 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" exitCode=0 Feb 17 01:33:53 crc kubenswrapper[4805]: I0217 01:33:53.822062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e"} Feb 17 01:33:53 crc kubenswrapper[4805]: I0217 01:33:53.822101 4805 scope.go:117] "RemoveContainer" containerID="e70b524fd8a026bbaa383cfef94d5abcac048725251bed7339bb44bcbe80de3b" Feb 17 01:33:53 crc kubenswrapper[4805]: I0217 01:33:53.823710 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:33:53 crc kubenswrapper[4805]: E0217 01:33:53.824816 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:33:57 crc kubenswrapper[4805]: E0217 01:33:57.788666 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:34:02 crc kubenswrapper[4805]: E0217 01:34:02.787932 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:34:04 crc kubenswrapper[4805]: I0217 01:34:04.794358 4805 scope.go:117] "RemoveContainer" 
containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:34:04 crc kubenswrapper[4805]: E0217 01:34:04.794955 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:34:09 crc kubenswrapper[4805]: I0217 01:34:09.787298 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:34:09 crc kubenswrapper[4805]: E0217 01:34:09.917859 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:34:09 crc kubenswrapper[4805]: E0217 01:34:09.917931 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:34:09 crc kubenswrapper[4805]: E0217 01:34:09.918081 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:34:09 crc kubenswrapper[4805]: E0217 01:34:09.919375 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:34:16 crc kubenswrapper[4805]: E0217 01:34:16.933497 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:34:16 crc kubenswrapper[4805]: E0217 01:34:16.933881 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:34:16 crc kubenswrapper[4805]: E0217 01:34:16.934023 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:34:16 crc kubenswrapper[4805]: E0217 01:34:16.935266 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:34:18 crc kubenswrapper[4805]: I0217 01:34:18.785940 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:34:18 crc kubenswrapper[4805]: E0217 01:34:18.786872 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:34:24 crc kubenswrapper[4805]: E0217 01:34:24.812936 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:34:29 crc kubenswrapper[4805]: E0217 01:34:29.786866 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:34:31 crc kubenswrapper[4805]: I0217 01:34:31.785611 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:34:31 crc kubenswrapper[4805]: E0217 01:34:31.786389 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:34:36 crc kubenswrapper[4805]: E0217 01:34:36.788352 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:34:41 crc kubenswrapper[4805]: E0217 01:34:41.786869 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:34:44 crc kubenswrapper[4805]: I0217 01:34:44.801742 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:34:44 crc kubenswrapper[4805]: E0217 01:34:44.804497 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:34:50 crc kubenswrapper[4805]: E0217 01:34:50.789292 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:34:54 crc kubenswrapper[4805]: E0217 01:34:54.805875 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:34:57 crc kubenswrapper[4805]: I0217 01:34:57.784982 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:34:57 crc kubenswrapper[4805]: E0217 01:34:57.785891 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:35:05 crc kubenswrapper[4805]: E0217 01:35:05.787593 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:35:07 crc kubenswrapper[4805]: E0217 01:35:07.787055 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:35:11 crc kubenswrapper[4805]: I0217 01:35:11.786503 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:35:11 crc kubenswrapper[4805]: E0217 01:35:11.788301 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:35:20 crc kubenswrapper[4805]: E0217 01:35:20.788393 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:35:22 crc kubenswrapper[4805]: E0217 01:35:22.788129 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:35:25 crc kubenswrapper[4805]: I0217 01:35:25.785389 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:35:25 crc kubenswrapper[4805]: E0217 01:35:25.785964 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:35:32 crc kubenswrapper[4805]: E0217 01:35:32.789620 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:35:34 crc kubenswrapper[4805]: E0217 01:35:34.803080 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:35:39 crc kubenswrapper[4805]: I0217 01:35:39.786590 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:35:39 crc kubenswrapper[4805]: E0217 01:35:39.787629 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:35:43 crc kubenswrapper[4805]: E0217 01:35:43.788161 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:35:48 crc kubenswrapper[4805]: E0217 01:35:48.791582 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:35:52 crc kubenswrapper[4805]: I0217 01:35:52.785108 4805 scope.go:117] "RemoveContainer" 
containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:35:52 crc kubenswrapper[4805]: E0217 01:35:52.786257 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:35:56 crc kubenswrapper[4805]: E0217 01:35:56.821520 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:36:02 crc kubenswrapper[4805]: E0217 01:36:02.787243 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:36:05 crc kubenswrapper[4805]: I0217 01:36:05.785309 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:36:05 crc kubenswrapper[4805]: E0217 01:36:05.786268 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:36:09 crc kubenswrapper[4805]: E0217 01:36:09.787380 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:36:15 crc kubenswrapper[4805]: E0217 01:36:15.787283 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:36:19 crc kubenswrapper[4805]: I0217 01:36:19.785476 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:36:19 crc kubenswrapper[4805]: E0217 01:36:19.786291 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:36:22 crc 
kubenswrapper[4805]: E0217 01:36:22.787282 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:36:30 crc kubenswrapper[4805]: I0217 01:36:30.786003 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:36:30 crc kubenswrapper[4805]: E0217 01:36:30.787538 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:36:30 crc kubenswrapper[4805]: E0217 01:36:30.788871 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:36:34 crc kubenswrapper[4805]: E0217 01:36:34.804288 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:36:42 crc kubenswrapper[4805]: I0217 01:36:42.785474 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:36:42 crc kubenswrapper[4805]: E0217 01:36:42.786275 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:36:44 crc kubenswrapper[4805]: E0217 01:36:44.814393 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:36:48 crc kubenswrapper[4805]: E0217 01:36:48.788135 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:36:55 crc kubenswrapper[4805]: E0217 01:36:55.788628 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:36:57 crc kubenswrapper[4805]: I0217 01:36:57.784980 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:36:57 crc kubenswrapper[4805]: E0217 01:36:57.785920 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:37:00 crc kubenswrapper[4805]: E0217 01:37:00.787937 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:37:09 crc kubenswrapper[4805]: E0217 01:37:09.787975 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:37:11 crc kubenswrapper[4805]: I0217 01:37:11.785040 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:37:11 crc kubenswrapper[4805]: E0217 01:37:11.786064 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:37:15 crc kubenswrapper[4805]: E0217 01:37:15.788041 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:37:22 crc kubenswrapper[4805]: E0217 01:37:22.789144 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:37:26 crc kubenswrapper[4805]: I0217 01:37:26.784873 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:37:26 crc kubenswrapper[4805]: E0217 01:37:26.785558 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:37:28 crc kubenswrapper[4805]: E0217 01:37:28.788566 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:37:37 crc kubenswrapper[4805]: E0217 01:37:37.788119 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:37:40 crc kubenswrapper[4805]: I0217 01:37:40.785381 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:37:40 crc kubenswrapper[4805]: E0217 01:37:40.785895 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:37:40 crc kubenswrapper[4805]: E0217 01:37:40.788261 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:37:51 crc kubenswrapper[4805]: E0217 01:37:51.787496 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:37:51 crc kubenswrapper[4805]: E0217 01:37:51.787859 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:37:53 crc kubenswrapper[4805]: I0217 01:37:53.785656 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:37:53 crc kubenswrapper[4805]: E0217 01:37:53.787966 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:38:03 crc kubenswrapper[4805]: E0217 01:38:03.787542 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:38:05 crc kubenswrapper[4805]: I0217 01:38:05.784434 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:38:05 crc kubenswrapper[4805]: E0217 01:38:05.784882 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:38:06 crc kubenswrapper[4805]: E0217 01:38:06.787309 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:38:17 crc kubenswrapper[4805]: E0217 01:38:17.788454 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:38:18 crc kubenswrapper[4805]: I0217 01:38:18.785420 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:38:18 crc kubenswrapper[4805]: E0217 01:38:18.786309 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:38:18 crc kubenswrapper[4805]: E0217 01:38:18.788027 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:38:29 crc kubenswrapper[4805]: E0217 01:38:29.786234 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:38:31 crc kubenswrapper[4805]: E0217 01:38:31.787612 4805 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:38:33 crc kubenswrapper[4805]: I0217 01:38:33.785260 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:38:33 crc kubenswrapper[4805]: E0217 01:38:33.786473 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:38:40 crc kubenswrapper[4805]: E0217 01:38:40.787529 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:38:43 crc kubenswrapper[4805]: E0217 01:38:43.788645 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:38:45 crc kubenswrapper[4805]: I0217 01:38:45.785271 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:38:45 crc kubenswrapper[4805]: E0217 01:38:45.786252 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:38:53 crc kubenswrapper[4805]: E0217 01:38:53.787174 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:38:57 crc kubenswrapper[4805]: I0217 01:38:57.785646 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:38:58 crc kubenswrapper[4805]: I0217 01:38:58.725939 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"c8cc79b212c9b496269deac6ab82414bf335d0bbc5c5eb8163f2cba41c1cf7a4"} Feb 17 01:38:58 crc kubenswrapper[4805]: E0217 01:38:58.789719 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:39:05 crc kubenswrapper[4805]: E0217 01:39:05.788207 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:39:12 crc kubenswrapper[4805]: E0217 01:39:12.787546 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:39:19 crc kubenswrapper[4805]: I0217 01:39:19.788107 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:39:19 crc kubenswrapper[4805]: E0217 01:39:19.916869 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:39:19 crc kubenswrapper[4805]: E0217 01:39:19.916965 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:39:19 crc kubenswrapper[4805]: E0217 01:39:19.917191 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:39:19 crc kubenswrapper[4805]: E0217 01:39:19.918458 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:39:26 crc kubenswrapper[4805]: E0217 01:39:26.911960 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:39:26 crc kubenswrapper[4805]: E0217 01:39:26.912482 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:39:26 crc kubenswrapper[4805]: E0217 01:39:26.912588 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:39:26 crc kubenswrapper[4805]: E0217 01:39:26.913893 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:39:30 crc kubenswrapper[4805]: E0217 01:39:30.788975 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:39:37 crc kubenswrapper[4805]: E0217 01:39:37.788712 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:39:45 crc kubenswrapper[4805]: E0217 01:39:45.787478 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:39:50 crc kubenswrapper[4805]: E0217 01:39:50.789065 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:39:56 crc kubenswrapper[4805]: E0217 01:39:56.788271 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:40:03 crc kubenswrapper[4805]: E0217 01:40:03.788455 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:40:09 crc kubenswrapper[4805]: E0217 01:40:09.787993 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:40:18 crc kubenswrapper[4805]: E0217 01:40:18.788864 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:40:22 crc kubenswrapper[4805]: I0217 01:40:22.158283 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f85b021d-db5c-4716-b94f-2198c439c614" containerName="galera" probeResult="failure" output="command timed out" Feb 17 01:40:24 crc kubenswrapper[4805]: E0217 01:40:24.797978 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:40:30 crc kubenswrapper[4805]: E0217 01:40:30.787860 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:40:39 crc kubenswrapper[4805]: E0217 01:40:39.795674 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:40:41 crc kubenswrapper[4805]: I0217 01:40:41.745800 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l5j2b"] Feb 17 01:40:41 crc kubenswrapper[4805]: E0217 01:40:41.746861 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerName="extract-content" Feb 17 01:40:41 crc kubenswrapper[4805]: I0217 01:40:41.746885 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerName="extract-content" Feb 17 01:40:41 crc kubenswrapper[4805]: E0217 01:40:41.746908 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerName="extract-utilities" Feb 17 01:40:41 crc kubenswrapper[4805]: I0217 01:40:41.746922 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerName="extract-utilities" Feb 17 01:40:41 crc kubenswrapper[4805]: E0217 01:40:41.746940 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerName="registry-server" Feb 17 01:40:41 crc kubenswrapper[4805]: I0217 01:40:41.746953 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerName="registry-server" Feb 17 01:40:41 crc kubenswrapper[4805]: I0217 01:40:41.747372 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="08802fa9-c60f-49bc-a71b-64491dfad8d3" containerName="registry-server" Feb 17 01:40:41 crc 
kubenswrapper[4805]: I0217 01:40:41.749899 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:41 crc kubenswrapper[4805]: I0217 01:40:41.766829 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l5j2b"] Feb 17 01:40:41 crc kubenswrapper[4805]: I0217 01:40:41.902872 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsc2j\" (UniqueName: \"kubernetes.io/projected/7e9d834d-b360-44ad-be26-65cebde33a70-kube-api-access-fsc2j\") pod \"certified-operators-l5j2b\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:41 crc kubenswrapper[4805]: I0217 01:40:41.902993 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-catalog-content\") pod \"certified-operators-l5j2b\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:41 crc kubenswrapper[4805]: I0217 01:40:41.903084 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-utilities\") pod \"certified-operators-l5j2b\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:42 crc kubenswrapper[4805]: I0217 01:40:42.005288 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsc2j\" (UniqueName: \"kubernetes.io/projected/7e9d834d-b360-44ad-be26-65cebde33a70-kube-api-access-fsc2j\") pod \"certified-operators-l5j2b\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:42 crc kubenswrapper[4805]: I0217 01:40:42.005490 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-catalog-content\") pod \"certified-operators-l5j2b\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:42 crc kubenswrapper[4805]: I0217 01:40:42.005631 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-utilities\") pod \"certified-operators-l5j2b\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:42 crc kubenswrapper[4805]: I0217 01:40:42.006101 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-catalog-content\") pod \"certified-operators-l5j2b\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:42 crc kubenswrapper[4805]: I0217 01:40:42.006223 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-utilities\") pod \"certified-operators-l5j2b\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " pod="openshift-marketplace/certified-operators-l5j2b" Feb 
17 01:40:42 crc kubenswrapper[4805]: I0217 01:40:42.028829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsc2j\" (UniqueName: \"kubernetes.io/projected/7e9d834d-b360-44ad-be26-65cebde33a70-kube-api-access-fsc2j\") pod \"certified-operators-l5j2b\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:42 crc kubenswrapper[4805]: I0217 01:40:42.115647 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:42 crc kubenswrapper[4805]: I0217 01:40:42.683487 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l5j2b"] Feb 17 01:40:42 crc kubenswrapper[4805]: E0217 01:40:42.787377 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:40:43 crc kubenswrapper[4805]: I0217 01:40:43.179641 4805 generic.go:334] "Generic (PLEG): container finished" podID="7e9d834d-b360-44ad-be26-65cebde33a70" containerID="8e47b3472e848ea85f830a5afd1bd5f2bcbaa88ac1416e8de62a97f2ffc0e7d3" exitCode=0 Feb 17 01:40:43 crc kubenswrapper[4805]: I0217 01:40:43.179774 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5j2b" event={"ID":"7e9d834d-b360-44ad-be26-65cebde33a70","Type":"ContainerDied","Data":"8e47b3472e848ea85f830a5afd1bd5f2bcbaa88ac1416e8de62a97f2ffc0e7d3"} Feb 17 01:40:43 crc kubenswrapper[4805]: I0217 01:40:43.179844 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5j2b" event={"ID":"7e9d834d-b360-44ad-be26-65cebde33a70","Type":"ContainerStarted","Data":"9a1a7c5d96da0acb7d727585b08c77be1df36f922da520bf128f2c822932d498"} Feb 17 01:40:44 crc kubenswrapper[4805]: I0217 01:40:44.196380 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5j2b" event={"ID":"7e9d834d-b360-44ad-be26-65cebde33a70","Type":"ContainerStarted","Data":"e88fca62a752a095d8007a58c578137c9f0bdf12ceeae7ad026ef190f7a9ebc8"} Feb 17 01:40:45 crc kubenswrapper[4805]: I0217 01:40:45.213900 4805 generic.go:334] "Generic (PLEG): container finished" podID="7e9d834d-b360-44ad-be26-65cebde33a70" containerID="e88fca62a752a095d8007a58c578137c9f0bdf12ceeae7ad026ef190f7a9ebc8" exitCode=0 Feb 17 01:40:45 crc kubenswrapper[4805]: I0217 01:40:45.213991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5j2b" event={"ID":"7e9d834d-b360-44ad-be26-65cebde33a70","Type":"ContainerDied","Data":"e88fca62a752a095d8007a58c578137c9f0bdf12ceeae7ad026ef190f7a9ebc8"} Feb 17 01:40:46 crc kubenswrapper[4805]: I0217 01:40:46.230970 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5j2b" event={"ID":"7e9d834d-b360-44ad-be26-65cebde33a70","Type":"ContainerStarted","Data":"893d835d587e956dea12caca9beb15cd257d69394b22f86ad6b4518df5b01ac2"} Feb 17 01:40:46 crc kubenswrapper[4805]: I0217 01:40:46.260209 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l5j2b" podStartSLOduration=2.82707808 podStartE2EDuration="5.260179771s" 
podCreationTimestamp="2026-02-17 01:40:41 +0000 UTC" firstStartedPulling="2026-02-17 01:40:43.183018313 +0000 UTC m=+4669.198827721" lastFinishedPulling="2026-02-17 01:40:45.616120014 +0000 UTC m=+4671.631929412" observedRunningTime="2026-02-17 01:40:46.255298375 +0000 UTC m=+4672.271107813" watchObservedRunningTime="2026-02-17 01:40:46.260179771 +0000 UTC m=+4672.275989199" Feb 17 01:40:52 crc kubenswrapper[4805]: I0217 01:40:52.116211 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:52 crc kubenswrapper[4805]: I0217 01:40:52.116736 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:52 crc kubenswrapper[4805]: I0217 01:40:52.429649 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:52 crc kubenswrapper[4805]: I0217 01:40:52.502468 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:52 crc kubenswrapper[4805]: I0217 01:40:52.676884 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l5j2b"] Feb 17 01:40:54 crc kubenswrapper[4805]: I0217 01:40:54.349479 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l5j2b" podUID="7e9d834d-b360-44ad-be26-65cebde33a70" containerName="registry-server" containerID="cri-o://893d835d587e956dea12caca9beb15cd257d69394b22f86ad6b4518df5b01ac2" gracePeriod=2 Feb 17 01:40:54 crc kubenswrapper[4805]: E0217 01:40:54.805131 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:40:54 crc kubenswrapper[4805]: E0217 01:40:54.805587 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.361768 4805 generic.go:334] "Generic (PLEG): container finished" podID="7e9d834d-b360-44ad-be26-65cebde33a70" containerID="893d835d587e956dea12caca9beb15cd257d69394b22f86ad6b4518df5b01ac2" exitCode=0 Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.362082 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5j2b" event={"ID":"7e9d834d-b360-44ad-be26-65cebde33a70","Type":"ContainerDied","Data":"893d835d587e956dea12caca9beb15cd257d69394b22f86ad6b4518df5b01ac2"} Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.487465 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.553440 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-catalog-content\") pod \"7e9d834d-b360-44ad-be26-65cebde33a70\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.553567 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-utilities\") pod \"7e9d834d-b360-44ad-be26-65cebde33a70\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.553764 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsc2j\" (UniqueName: \"kubernetes.io/projected/7e9d834d-b360-44ad-be26-65cebde33a70-kube-api-access-fsc2j\") pod \"7e9d834d-b360-44ad-be26-65cebde33a70\" (UID: \"7e9d834d-b360-44ad-be26-65cebde33a70\") " Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.554601 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-utilities" (OuterVolumeSpecName: "utilities") pod "7e9d834d-b360-44ad-be26-65cebde33a70" (UID: "7e9d834d-b360-44ad-be26-65cebde33a70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.563728 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9d834d-b360-44ad-be26-65cebde33a70-kube-api-access-fsc2j" (OuterVolumeSpecName: "kube-api-access-fsc2j") pod "7e9d834d-b360-44ad-be26-65cebde33a70" (UID: "7e9d834d-b360-44ad-be26-65cebde33a70"). InnerVolumeSpecName "kube-api-access-fsc2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.644014 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e9d834d-b360-44ad-be26-65cebde33a70" (UID: "7e9d834d-b360-44ad-be26-65cebde33a70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.658885 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsc2j\" (UniqueName: \"kubernetes.io/projected/7e9d834d-b360-44ad-be26-65cebde33a70-kube-api-access-fsc2j\") on node \"crc\" DevicePath \"\"" Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.659084 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:40:55 crc kubenswrapper[4805]: I0217 01:40:55.659172 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9d834d-b360-44ad-be26-65cebde33a70-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:40:56 crc kubenswrapper[4805]: I0217 01:40:56.379801 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l5j2b" event={"ID":"7e9d834d-b360-44ad-be26-65cebde33a70","Type":"ContainerDied","Data":"9a1a7c5d96da0acb7d727585b08c77be1df36f922da520bf128f2c822932d498"} Feb 17 01:40:56 crc kubenswrapper[4805]: I0217 01:40:56.379874 4805 scope.go:117] "RemoveContainer" containerID="893d835d587e956dea12caca9beb15cd257d69394b22f86ad6b4518df5b01ac2" Feb 17 01:40:56 crc kubenswrapper[4805]: I0217 01:40:56.380053 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l5j2b" Feb 17 01:40:56 crc kubenswrapper[4805]: I0217 01:40:56.411635 4805 scope.go:117] "RemoveContainer" containerID="e88fca62a752a095d8007a58c578137c9f0bdf12ceeae7ad026ef190f7a9ebc8" Feb 17 01:40:56 crc kubenswrapper[4805]: I0217 01:40:56.452550 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l5j2b"] Feb 17 01:40:56 crc kubenswrapper[4805]: I0217 01:40:56.464074 4805 scope.go:117] "RemoveContainer" containerID="8e47b3472e848ea85f830a5afd1bd5f2bcbaa88ac1416e8de62a97f2ffc0e7d3" Feb 17 01:40:56 crc kubenswrapper[4805]: I0217 01:40:56.473465 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l5j2b"] Feb 17 01:40:56 crc kubenswrapper[4805]: I0217 01:40:56.801630 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e9d834d-b360-44ad-be26-65cebde33a70" path="/var/lib/kubelet/pods/7e9d834d-b360-44ad-be26-65cebde33a70/volumes" Feb 17 01:41:07 crc kubenswrapper[4805]: E0217 01:41:07.788645 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:41:09 crc kubenswrapper[4805]: E0217 01:41:09.786800 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:41:19 crc kubenswrapper[4805]: E0217 01:41:19.788254 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:41:23 crc kubenswrapper[4805]: I0217 01:41:23.077453 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:41:23 crc kubenswrapper[4805]: I0217 01:41:23.078088 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:41:24 crc kubenswrapper[4805]: E0217 01:41:24.797688 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:41:33 crc kubenswrapper[4805]: E0217 01:41:33.788489 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.412483 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bk2cx"] Feb 17 01:41:35 crc kubenswrapper[4805]: E0217 01:41:35.413749 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9d834d-b360-44ad-be26-65cebde33a70" containerName="extract-utilities" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.413795 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9d834d-b360-44ad-be26-65cebde33a70" containerName="extract-utilities" Feb 17 01:41:35 crc kubenswrapper[4805]: E0217 01:41:35.413847 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9d834d-b360-44ad-be26-65cebde33a70" containerName="registry-server" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.413868 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9d834d-b360-44ad-be26-65cebde33a70" containerName="registry-server" Feb 17 01:41:35 crc kubenswrapper[4805]: E0217 01:41:35.413903 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9d834d-b360-44ad-be26-65cebde33a70" containerName="extract-content" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.413923 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9d834d-b360-44ad-be26-65cebde33a70" containerName="extract-content" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.414413 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9d834d-b360-44ad-be26-65cebde33a70" containerName="registry-server" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.417636 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.425645 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bk2cx"] Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.464316 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-catalog-content\") pod \"community-operators-bk2cx\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.464546 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kppg5\" (UniqueName: \"kubernetes.io/projected/478b4f2f-b96e-43ef-824b-7136016c1f41-kube-api-access-kppg5\") pod \"community-operators-bk2cx\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.464744 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-utilities\") pod \"community-operators-bk2cx\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.567473 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-catalog-content\") pod \"community-operators-bk2cx\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.567739 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kppg5\" (UniqueName: \"kubernetes.io/projected/478b4f2f-b96e-43ef-824b-7136016c1f41-kube-api-access-kppg5\") pod \"community-operators-bk2cx\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.567897 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-utilities\") pod \"community-operators-bk2cx\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.568077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-catalog-content\") pod \"community-operators-bk2cx\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.568403 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-utilities\") pod \"community-operators-bk2cx\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.594437 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kppg5\" (UniqueName: \"kubernetes.io/projected/478b4f2f-b96e-43ef-824b-7136016c1f41-kube-api-access-kppg5\") pod \"community-operators-bk2cx\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:35 crc kubenswrapper[4805]: I0217 01:41:35.751254 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:35 crc kubenswrapper[4805]: E0217 01:41:35.788864 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:41:36 crc kubenswrapper[4805]: I0217 01:41:36.299781 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bk2cx"] Feb 17 01:41:36 crc kubenswrapper[4805]: I0217 01:41:36.864350 4805 generic.go:334] "Generic (PLEG): container finished" podID="478b4f2f-b96e-43ef-824b-7136016c1f41" containerID="9da0a076df4a917c463b48c470bffe982f8c3a2777e958901131d5dd754bb09a" exitCode=0 Feb 17 01:41:36 crc kubenswrapper[4805]: I0217 01:41:36.864428 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk2cx" event={"ID":"478b4f2f-b96e-43ef-824b-7136016c1f41","Type":"ContainerDied","Data":"9da0a076df4a917c463b48c470bffe982f8c3a2777e958901131d5dd754bb09a"} Feb 17 01:41:36 crc kubenswrapper[4805]: I0217 01:41:36.864651 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk2cx" event={"ID":"478b4f2f-b96e-43ef-824b-7136016c1f41","Type":"ContainerStarted","Data":"0da810703ba0e9241f3acac8d2be3b6795f14c7936a956816e94ddda31ef4063"} Feb 17 01:41:38 crc kubenswrapper[4805]: I0217 01:41:38.898945 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk2cx" event={"ID":"478b4f2f-b96e-43ef-824b-7136016c1f41","Type":"ContainerStarted","Data":"63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c"} Feb 17 01:41:39 crc kubenswrapper[4805]: I0217 01:41:39.914116 4805 generic.go:334] "Generic (PLEG): container finished" podID="478b4f2f-b96e-43ef-824b-7136016c1f41" containerID="63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c" exitCode=0 Feb 17 01:41:39 crc kubenswrapper[4805]: I0217 01:41:39.914182 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk2cx" event={"ID":"478b4f2f-b96e-43ef-824b-7136016c1f41","Type":"ContainerDied","Data":"63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c"} Feb 17 01:41:40 crc kubenswrapper[4805]: I0217 01:41:40.934270 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk2cx" event={"ID":"478b4f2f-b96e-43ef-824b-7136016c1f41","Type":"ContainerStarted","Data":"bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc"} Feb 17 01:41:40 crc kubenswrapper[4805]: I0217 01:41:40.963866 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bk2cx" podStartSLOduration=2.5264429760000002 podStartE2EDuration="5.963840169s" podCreationTimestamp="2026-02-17 01:41:35 +0000 UTC" firstStartedPulling="2026-02-17 
01:41:36.866719432 +0000 UTC m=+4722.882528830" lastFinishedPulling="2026-02-17 01:41:40.304116615 +0000 UTC m=+4726.319926023" observedRunningTime="2026-02-17 01:41:40.952879254 +0000 UTC m=+4726.968688652" watchObservedRunningTime="2026-02-17 01:41:40.963840169 +0000 UTC m=+4726.979649597" Feb 17 01:41:45 crc kubenswrapper[4805]: I0217 01:41:45.752034 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:45 crc kubenswrapper[4805]: I0217 01:41:45.752521 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:45 crc kubenswrapper[4805]: I0217 01:41:45.813741 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:46 crc kubenswrapper[4805]: I0217 01:41:46.046178 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:46 crc kubenswrapper[4805]: I0217 01:41:46.114767 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bk2cx"] Feb 17 01:41:46 crc kubenswrapper[4805]: E0217 01:41:46.788426 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:41:47 crc kubenswrapper[4805]: E0217 01:41:47.787011 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:41:48 crc kubenswrapper[4805]: I0217 01:41:48.011767 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bk2cx" podUID="478b4f2f-b96e-43ef-824b-7136016c1f41" containerName="registry-server" containerID="cri-o://bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc" gracePeriod=2 Feb 17 01:41:48 crc kubenswrapper[4805]: I0217 01:41:48.842288 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:48 crc kubenswrapper[4805]: I0217 01:41:48.994191 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kppg5\" (UniqueName: \"kubernetes.io/projected/478b4f2f-b96e-43ef-824b-7136016c1f41-kube-api-access-kppg5\") pod \"478b4f2f-b96e-43ef-824b-7136016c1f41\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " Feb 17 01:41:48 crc kubenswrapper[4805]: I0217 01:41:48.994413 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-utilities\") pod \"478b4f2f-b96e-43ef-824b-7136016c1f41\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " Feb 17 01:41:48 crc kubenswrapper[4805]: I0217 01:41:48.994491 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-catalog-content\") pod \"478b4f2f-b96e-43ef-824b-7136016c1f41\" (UID: \"478b4f2f-b96e-43ef-824b-7136016c1f41\") " Feb 17 01:41:48 crc kubenswrapper[4805]: I0217 01:41:48.996224 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-utilities" (OuterVolumeSpecName: "utilities") pod "478b4f2f-b96e-43ef-824b-7136016c1f41" (UID: "478b4f2f-b96e-43ef-824b-7136016c1f41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.008305 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478b4f2f-b96e-43ef-824b-7136016c1f41-kube-api-access-kppg5" (OuterVolumeSpecName: "kube-api-access-kppg5") pod "478b4f2f-b96e-43ef-824b-7136016c1f41" (UID: "478b4f2f-b96e-43ef-824b-7136016c1f41"). InnerVolumeSpecName "kube-api-access-kppg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.027558 4805 generic.go:334] "Generic (PLEG): container finished" podID="478b4f2f-b96e-43ef-824b-7136016c1f41" containerID="bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc" exitCode=0 Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.027606 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk2cx" event={"ID":"478b4f2f-b96e-43ef-824b-7136016c1f41","Type":"ContainerDied","Data":"bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc"} Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.027642 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk2cx" event={"ID":"478b4f2f-b96e-43ef-824b-7136016c1f41","Type":"ContainerDied","Data":"0da810703ba0e9241f3acac8d2be3b6795f14c7936a956816e94ddda31ef4063"} Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.027692 4805 scope.go:117] "RemoveContainer" containerID="bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.027704 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk2cx" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.059869 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "478b4f2f-b96e-43ef-824b-7136016c1f41" (UID: "478b4f2f-b96e-43ef-824b-7136016c1f41"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.095621 4805 scope.go:117] "RemoveContainer" containerID="63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.098811 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kppg5\" (UniqueName: \"kubernetes.io/projected/478b4f2f-b96e-43ef-824b-7136016c1f41-kube-api-access-kppg5\") on node \"crc\" DevicePath \"\"" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.098843 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.098858 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478b4f2f-b96e-43ef-824b-7136016c1f41-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.128536 4805 scope.go:117] "RemoveContainer" containerID="9da0a076df4a917c463b48c470bffe982f8c3a2777e958901131d5dd754bb09a" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.188340 4805 scope.go:117] "RemoveContainer" containerID="bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc" Feb 17 01:41:49 crc kubenswrapper[4805]: E0217 01:41:49.189109 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc\": container with ID starting with bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc not found: ID does not exist" containerID="bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.189172 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc"} err="failed to get container status \"bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc\": rpc error: code = NotFound desc = could not find container \"bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc\": container with ID starting with bcf5543fcbf427230744a5d22812f7b699a3b19ac88ebc37be814777806be2bc not found: ID does not exist" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.189361 4805 scope.go:117] "RemoveContainer" containerID="63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c" Feb 17 01:41:49 crc kubenswrapper[4805]: E0217 01:41:49.189715 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c\": container with ID starting with 63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c not found: ID does not exist" 
containerID="63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.189760 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c"} err="failed to get container status \"63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c\": rpc error: code = NotFound desc = could not find container \"63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c\": container with ID starting with 63351b7a6cc28eecd5f292283330ba005436ba2fdaab7ab01cf9c734e26d967c not found: ID does not exist" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.189781 4805 scope.go:117] "RemoveContainer" containerID="9da0a076df4a917c463b48c470bffe982f8c3a2777e958901131d5dd754bb09a" Feb 17 01:41:49 crc kubenswrapper[4805]: E0217 01:41:49.190339 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9da0a076df4a917c463b48c470bffe982f8c3a2777e958901131d5dd754bb09a\": container with ID starting with 9da0a076df4a917c463b48c470bffe982f8c3a2777e958901131d5dd754bb09a not found: ID does not exist" containerID="9da0a076df4a917c463b48c470bffe982f8c3a2777e958901131d5dd754bb09a" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.190480 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da0a076df4a917c463b48c470bffe982f8c3a2777e958901131d5dd754bb09a"} err="failed to get container status \"9da0a076df4a917c463b48c470bffe982f8c3a2777e958901131d5dd754bb09a\": rpc error: code = NotFound desc = could not find container \"9da0a076df4a917c463b48c470bffe982f8c3a2777e958901131d5dd754bb09a\": container with ID starting with 9da0a076df4a917c463b48c470bffe982f8c3a2777e958901131d5dd754bb09a not found: ID does not exist" Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.385795 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bk2cx"] Feb 17 01:41:49 crc kubenswrapper[4805]: I0217 01:41:49.403600 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bk2cx"] Feb 17 01:41:50 crc kubenswrapper[4805]: I0217 01:41:50.796224 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="478b4f2f-b96e-43ef-824b-7136016c1f41" path="/var/lib/kubelet/pods/478b4f2f-b96e-43ef-824b-7136016c1f41/volumes" Feb 17 01:41:53 crc kubenswrapper[4805]: I0217 01:41:53.077740 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:41:53 crc kubenswrapper[4805]: I0217 01:41:53.079529 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:41:57 crc kubenswrapper[4805]: E0217 01:41:57.787498 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.745330 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2phjj"] Feb 17 01:42:01 crc kubenswrapper[4805]: E0217 01:42:01.746524 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478b4f2f-b96e-43ef-824b-7136016c1f41" containerName="registry-server" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.746551 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="478b4f2f-b96e-43ef-824b-7136016c1f41" containerName="registry-server" Feb 17 01:42:01 crc kubenswrapper[4805]: E0217 01:42:01.746603 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478b4f2f-b96e-43ef-824b-7136016c1f41" containerName="extract-utilities" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.746615 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="478b4f2f-b96e-43ef-824b-7136016c1f41" containerName="extract-utilities" Feb 17 01:42:01 crc kubenswrapper[4805]: E0217 01:42:01.746642 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478b4f2f-b96e-43ef-824b-7136016c1f41" containerName="extract-content" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.746654 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="478b4f2f-b96e-43ef-824b-7136016c1f41" containerName="extract-content" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.747016 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="478b4f2f-b96e-43ef-824b-7136016c1f41" containerName="registry-server" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.754453 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.787020 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2phjj"] Feb 17 01:42:01 crc kubenswrapper[4805]: E0217 01:42:01.791387 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.839221 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-catalog-content\") pod \"redhat-marketplace-2phjj\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.839312 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-utilities\") pod \"redhat-marketplace-2phjj\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.839452 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n62xz\" (UniqueName: \"kubernetes.io/projected/fe1d68bb-ef39-4c93-b567-7b69337912c9-kube-api-access-n62xz\") pod \"redhat-marketplace-2phjj\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.941633 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-catalog-content\") pod \"redhat-marketplace-2phjj\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.941719 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-utilities\") pod \"redhat-marketplace-2phjj\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.941761 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n62xz\" (UniqueName: \"kubernetes.io/projected/fe1d68bb-ef39-4c93-b567-7b69337912c9-kube-api-access-n62xz\") pod \"redhat-marketplace-2phjj\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.942635 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-utilities\") pod \"redhat-marketplace-2phjj\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.942663 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-catalog-content\") pod \"redhat-marketplace-2phjj\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:01 crc kubenswrapper[4805]: I0217 01:42:01.966758 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n62xz\" (UniqueName: \"kubernetes.io/projected/fe1d68bb-ef39-4c93-b567-7b69337912c9-kube-api-access-n62xz\") pod \"redhat-marketplace-2phjj\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:02 crc kubenswrapper[4805]: I0217 01:42:02.078365 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:02 crc kubenswrapper[4805]: I0217 01:42:02.648501 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2phjj"] Feb 17 01:42:03 crc kubenswrapper[4805]: I0217 01:42:03.200043 4805 generic.go:334] "Generic (PLEG): container finished" podID="fe1d68bb-ef39-4c93-b567-7b69337912c9" containerID="58c65c98bb7dfe9c92c23e515e62ea03775de9b611d24cd7da86f7a39a8a63d2" exitCode=0 Feb 17 01:42:03 crc kubenswrapper[4805]: I0217 01:42:03.200209 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2phjj" event={"ID":"fe1d68bb-ef39-4c93-b567-7b69337912c9","Type":"ContainerDied","Data":"58c65c98bb7dfe9c92c23e515e62ea03775de9b611d24cd7da86f7a39a8a63d2"} Feb 17 01:42:03 crc kubenswrapper[4805]: I0217 01:42:03.200427 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2phjj" event={"ID":"fe1d68bb-ef39-4c93-b567-7b69337912c9","Type":"ContainerStarted","Data":"b01f4767d7285e536bcbc01b943dd8b2b165416f743cf9e8225e8d4109a4887f"} Feb 17 01:42:04 crc kubenswrapper[4805]: I0217 01:42:04.212390 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2phjj" event={"ID":"fe1d68bb-ef39-4c93-b567-7b69337912c9","Type":"ContainerStarted","Data":"5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992"} Feb 17 01:42:05 crc kubenswrapper[4805]: I0217 01:42:05.223691 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2phjj" event={"ID":"fe1d68bb-ef39-4c93-b567-7b69337912c9","Type":"ContainerDied","Data":"5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992"} Feb 17 01:42:05 crc kubenswrapper[4805]: I0217 01:42:05.224415 4805 generic.go:334] "Generic (PLEG): container finished" podID="fe1d68bb-ef39-4c93-b567-7b69337912c9" containerID="5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992" exitCode=0 Feb 17 01:42:06 crc kubenswrapper[4805]: I0217 01:42:06.239793 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2phjj" event={"ID":"fe1d68bb-ef39-4c93-b567-7b69337912c9","Type":"ContainerStarted","Data":"536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391"} Feb 17 01:42:06 crc kubenswrapper[4805]: I0217 01:42:06.269983 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2phjj" podStartSLOduration=2.764472932 podStartE2EDuration="5.26996112s" podCreationTimestamp="2026-02-17 01:42:01 +0000 UTC" firstStartedPulling="2026-02-17 
01:42:03.202082651 +0000 UTC m=+4749.217892049" lastFinishedPulling="2026-02-17 01:42:05.707570799 +0000 UTC m=+4751.723380237" observedRunningTime="2026-02-17 01:42:06.260013403 +0000 UTC m=+4752.275822841" watchObservedRunningTime="2026-02-17 01:42:06.26996112 +0000 UTC m=+4752.285770528" Feb 17 01:42:12 crc kubenswrapper[4805]: I0217 01:42:12.080007 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:12 crc kubenswrapper[4805]: I0217 01:42:12.080842 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:12 crc kubenswrapper[4805]: I0217 01:42:12.333074 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:12 crc kubenswrapper[4805]: E0217 01:42:12.789630 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:42:12 crc kubenswrapper[4805]: I0217 01:42:12.910073 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:12 crc kubenswrapper[4805]: I0217 01:42:12.976902 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2phjj"] Feb 17 01:42:14 crc kubenswrapper[4805]: I0217 01:42:14.844048 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2phjj" podUID="fe1d68bb-ef39-4c93-b567-7b69337912c9" containerName="registry-server" containerID="cri-o://536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391" gracePeriod=2 Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.384759 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.472412 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n62xz\" (UniqueName: \"kubernetes.io/projected/fe1d68bb-ef39-4c93-b567-7b69337912c9-kube-api-access-n62xz\") pod \"fe1d68bb-ef39-4c93-b567-7b69337912c9\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.472562 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-utilities\") pod \"fe1d68bb-ef39-4c93-b567-7b69337912c9\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.472635 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-catalog-content\") pod \"fe1d68bb-ef39-4c93-b567-7b69337912c9\" (UID: \"fe1d68bb-ef39-4c93-b567-7b69337912c9\") " Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.474107 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-utilities" (OuterVolumeSpecName: "utilities") pod "fe1d68bb-ef39-4c93-b567-7b69337912c9" (UID: "fe1d68bb-ef39-4c93-b567-7b69337912c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.478613 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1d68bb-ef39-4c93-b567-7b69337912c9-kube-api-access-n62xz" (OuterVolumeSpecName: "kube-api-access-n62xz") pod "fe1d68bb-ef39-4c93-b567-7b69337912c9" (UID: "fe1d68bb-ef39-4c93-b567-7b69337912c9"). InnerVolumeSpecName "kube-api-access-n62xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.508259 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe1d68bb-ef39-4c93-b567-7b69337912c9" (UID: "fe1d68bb-ef39-4c93-b567-7b69337912c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.575544 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n62xz\" (UniqueName: \"kubernetes.io/projected/fe1d68bb-ef39-4c93-b567-7b69337912c9-kube-api-access-n62xz\") on node \"crc\" DevicePath \"\"" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.575576 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.575585 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe1d68bb-ef39-4c93-b567-7b69337912c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:42:15 crc kubenswrapper[4805]: E0217 01:42:15.787832 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.856962 4805 generic.go:334] "Generic (PLEG): container finished" podID="fe1d68bb-ef39-4c93-b567-7b69337912c9" containerID="536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391" exitCode=0 Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.857047 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2phjj" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.858353 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2phjj" event={"ID":"fe1d68bb-ef39-4c93-b567-7b69337912c9","Type":"ContainerDied","Data":"536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391"} Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.858491 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2phjj" event={"ID":"fe1d68bb-ef39-4c93-b567-7b69337912c9","Type":"ContainerDied","Data":"b01f4767d7285e536bcbc01b943dd8b2b165416f743cf9e8225e8d4109a4887f"} Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.858588 4805 scope.go:117] "RemoveContainer" containerID="536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.895797 4805 scope.go:117] "RemoveContainer" containerID="5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.902477 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2phjj"] Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.915529 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2phjj"] Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.936130 4805 scope.go:117] "RemoveContainer" containerID="58c65c98bb7dfe9c92c23e515e62ea03775de9b611d24cd7da86f7a39a8a63d2" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.995394 4805 scope.go:117] "RemoveContainer" containerID="536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391" Feb 17 01:42:15 crc kubenswrapper[4805]: E0217 01:42:15.996137 4805 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391\": container with ID starting with 536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391 not found: ID does not exist" containerID="536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.996183 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391"} err="failed to get container status \"536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391\": rpc error: code = NotFound desc = could not find container \"536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391\": container with ID starting with 536b90de6feff55822b3bb9790eb426e7ba77ae762c9de95167768c4cca58391 not found: ID does not exist" Feb 17 01:42:15 crc kubenswrapper[4805]: I0217 01:42:15.996209 4805 scope.go:117] "RemoveContainer" containerID="5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992" Feb 17 01:42:16 crc kubenswrapper[4805]: E0217 01:42:16.003462 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992\": container with ID starting with 5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992 not found: ID does not exist" containerID="5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992" Feb 17 01:42:16 crc kubenswrapper[4805]: I0217 01:42:16.003508 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992"} err="failed to get container status \"5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992\": rpc error: code = NotFound desc = could not find container \"5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992\": container with ID starting with 5c0fade86884a4e226cd7d26533409e5b5a1816322a1e3b797be06662978c992 not found: ID does not exist" Feb 17 01:42:16 crc kubenswrapper[4805]: I0217 01:42:16.003532 4805 scope.go:117] "RemoveContainer" containerID="58c65c98bb7dfe9c92c23e515e62ea03775de9b611d24cd7da86f7a39a8a63d2" Feb 17 01:42:16 crc kubenswrapper[4805]: E0217 01:42:16.004455 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58c65c98bb7dfe9c92c23e515e62ea03775de9b611d24cd7da86f7a39a8a63d2\": container with ID starting with 58c65c98bb7dfe9c92c23e515e62ea03775de9b611d24cd7da86f7a39a8a63d2 not found: ID does not exist" containerID="58c65c98bb7dfe9c92c23e515e62ea03775de9b611d24cd7da86f7a39a8a63d2" Feb 17 01:42:16 crc kubenswrapper[4805]: I0217 01:42:16.004478 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c65c98bb7dfe9c92c23e515e62ea03775de9b611d24cd7da86f7a39a8a63d2"} err="failed to get container status \"58c65c98bb7dfe9c92c23e515e62ea03775de9b611d24cd7da86f7a39a8a63d2\": rpc error: code = NotFound desc = could not find container \"58c65c98bb7dfe9c92c23e515e62ea03775de9b611d24cd7da86f7a39a8a63d2\": container with ID starting with 58c65c98bb7dfe9c92c23e515e62ea03775de9b611d24cd7da86f7a39a8a63d2 not found: ID does not exist" Feb 17 01:42:16 crc kubenswrapper[4805]: I0217 01:42:16.811836 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fe1d68bb-ef39-4c93-b567-7b69337912c9" path="/var/lib/kubelet/pods/fe1d68bb-ef39-4c93-b567-7b69337912c9/volumes" Feb 17 01:42:23 crc kubenswrapper[4805]: I0217 01:42:23.077835 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:42:23 crc kubenswrapper[4805]: I0217 01:42:23.078599 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:42:23 crc kubenswrapper[4805]: I0217 01:42:23.078670 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 01:42:23 crc kubenswrapper[4805]: I0217 01:42:23.079704 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8cc79b212c9b496269deac6ab82414bf335d0bbc5c5eb8163f2cba41c1cf7a4"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 01:42:23 crc kubenswrapper[4805]: I0217 01:42:23.079820 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://c8cc79b212c9b496269deac6ab82414bf335d0bbc5c5eb8163f2cba41c1cf7a4" gracePeriod=600 Feb 17 01:42:23 crc kubenswrapper[4805]: I0217 01:42:23.983086 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="c8cc79b212c9b496269deac6ab82414bf335d0bbc5c5eb8163f2cba41c1cf7a4" exitCode=0 Feb 17 01:42:23 crc kubenswrapper[4805]: I0217 01:42:23.983675 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"c8cc79b212c9b496269deac6ab82414bf335d0bbc5c5eb8163f2cba41c1cf7a4"} Feb 17 01:42:23 crc kubenswrapper[4805]: I0217 01:42:23.983896 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525"} Feb 17 01:42:23 crc kubenswrapper[4805]: I0217 01:42:23.983933 4805 scope.go:117] "RemoveContainer" containerID="f8c27c39ac21d245db2a345b2dba5b77d54124120d1446b369c50a41922a4d0e" Feb 17 01:42:25 crc kubenswrapper[4805]: E0217 01:42:25.787316 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:42:28 crc kubenswrapper[4805]: E0217 01:42:28.788299 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:42:36 crc kubenswrapper[4805]: E0217 01:42:36.790082 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:42:36 crc kubenswrapper[4805]: I0217 01:42:36.947044 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9l4s"] Feb 17 01:42:36 crc kubenswrapper[4805]: E0217 01:42:36.947843 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1d68bb-ef39-4c93-b567-7b69337912c9" containerName="extract-content" Feb 17 01:42:36 crc kubenswrapper[4805]: I0217 01:42:36.947881 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1d68bb-ef39-4c93-b567-7b69337912c9" containerName="extract-content" Feb 17 01:42:36 crc kubenswrapper[4805]: E0217 01:42:36.947944 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1d68bb-ef39-4c93-b567-7b69337912c9" containerName="extract-utilities" Feb 17 01:42:36 crc kubenswrapper[4805]: I0217 01:42:36.947956 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1d68bb-ef39-4c93-b567-7b69337912c9" containerName="extract-utilities" Feb 17 01:42:36 crc kubenswrapper[4805]: E0217 01:42:36.947985 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1d68bb-ef39-4c93-b567-7b69337912c9" containerName="registry-server" Feb 17 01:42:36 crc kubenswrapper[4805]: I0217 01:42:36.947996 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1d68bb-ef39-4c93-b567-7b69337912c9" containerName="registry-server" Feb 17 01:42:36 crc kubenswrapper[4805]: I0217 01:42:36.948367 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1d68bb-ef39-4c93-b567-7b69337912c9" containerName="registry-server" Feb 17 01:42:36 crc kubenswrapper[4805]: I0217 01:42:36.950938 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:36 crc kubenswrapper[4805]: I0217 01:42:36.975968 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9l4s"] Feb 17 01:42:37 crc kubenswrapper[4805]: I0217 01:42:37.001474 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-utilities\") pod \"redhat-operators-k9l4s\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:37 crc kubenswrapper[4805]: I0217 01:42:37.001535 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2ncw\" (UniqueName: \"kubernetes.io/projected/8f110b37-538b-405e-9885-feb330b794b7-kube-api-access-q2ncw\") pod \"redhat-operators-k9l4s\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:37 crc kubenswrapper[4805]: I0217 01:42:37.001790 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-catalog-content\") pod \"redhat-operators-k9l4s\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:37 crc kubenswrapper[4805]: I0217 01:42:37.103490 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-catalog-content\") pod \"redhat-operators-k9l4s\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:37 crc kubenswrapper[4805]: I0217 01:42:37.104107 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-utilities\") pod \"redhat-operators-k9l4s\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:37 crc kubenswrapper[4805]: I0217 01:42:37.104026 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-catalog-content\") pod \"redhat-operators-k9l4s\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:37 crc kubenswrapper[4805]: I0217 01:42:37.104256 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2ncw\" (UniqueName: \"kubernetes.io/projected/8f110b37-538b-405e-9885-feb330b794b7-kube-api-access-q2ncw\") pod \"redhat-operators-k9l4s\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:37 crc kubenswrapper[4805]: I0217 01:42:37.104671 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-utilities\") pod \"redhat-operators-k9l4s\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:37 crc kubenswrapper[4805]: I0217 01:42:37.127188 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q2ncw\" (UniqueName: \"kubernetes.io/projected/8f110b37-538b-405e-9885-feb330b794b7-kube-api-access-q2ncw\") pod \"redhat-operators-k9l4s\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:37 crc kubenswrapper[4805]: I0217 01:42:37.274427 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:37 crc kubenswrapper[4805]: I0217 01:42:37.779565 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9l4s"] Feb 17 01:42:38 crc kubenswrapper[4805]: I0217 01:42:38.148033 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9l4s" event={"ID":"8f110b37-538b-405e-9885-feb330b794b7","Type":"ContainerStarted","Data":"5084af16c2e1f85a75f6e804619d71c837daa4f5fd522ffbcbd3b72cfc902ac0"} Feb 17 01:42:38 crc kubenswrapper[4805]: E0217 01:42:38.737804 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f110b37_538b_405e_9885_feb330b794b7.slice/crio-4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f110b37_538b_405e_9885_feb330b794b7.slice/crio-conmon-4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62.scope\": RecentStats: unable to find data in memory cache]" Feb 17 01:42:39 crc kubenswrapper[4805]: I0217 01:42:39.159627 4805 generic.go:334] "Generic (PLEG): container finished" podID="8f110b37-538b-405e-9885-feb330b794b7" containerID="4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62" exitCode=0 Feb 17 01:42:39 crc kubenswrapper[4805]: I0217 01:42:39.159775 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9l4s" event={"ID":"8f110b37-538b-405e-9885-feb330b794b7","Type":"ContainerDied","Data":"4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62"} Feb 17 01:42:40 crc kubenswrapper[4805]: I0217 01:42:40.173526 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9l4s" event={"ID":"8f110b37-538b-405e-9885-feb330b794b7","Type":"ContainerStarted","Data":"fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b"} Feb 17 01:42:40 crc kubenswrapper[4805]: E0217 01:42:40.789750 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:42:43 crc kubenswrapper[4805]: I0217 01:42:43.218956 4805 generic.go:334] "Generic (PLEG): container finished" podID="8f110b37-538b-405e-9885-feb330b794b7" containerID="fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b" exitCode=0 Feb 17 01:42:43 crc kubenswrapper[4805]: I0217 01:42:43.219056 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9l4s" event={"ID":"8f110b37-538b-405e-9885-feb330b794b7","Type":"ContainerDied","Data":"fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b"} Feb 17 01:42:45 crc kubenswrapper[4805]: I0217 01:42:45.242618 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9l4s" event={"ID":"8f110b37-538b-405e-9885-feb330b794b7","Type":"ContainerStarted","Data":"4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627"} Feb 17 01:42:45 crc kubenswrapper[4805]: I0217 01:42:45.271796 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9l4s" podStartSLOduration=4.22741196 podStartE2EDuration="9.271779507s" podCreationTimestamp="2026-02-17 01:42:36 +0000 UTC" firstStartedPulling="2026-02-17 01:42:39.161711985 +0000 UTC m=+4785.177521403" lastFinishedPulling="2026-02-17 01:42:44.206079522 +0000 UTC m=+4790.221888950" observedRunningTime="2026-02-17 01:42:45.264035131 +0000 UTC m=+4791.279844539" watchObservedRunningTime="2026-02-17 01:42:45.271779507 +0000 UTC m=+4791.287588905" Feb 17 01:42:47 crc kubenswrapper[4805]: I0217 01:42:47.275264 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:47 crc kubenswrapper[4805]: I0217 01:42:47.275536 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:48 crc kubenswrapper[4805]: I0217 01:42:48.320157 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k9l4s" podUID="8f110b37-538b-405e-9885-feb330b794b7" containerName="registry-server" probeResult="failure" output=< Feb 17 01:42:48 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 01:42:48 crc kubenswrapper[4805]: > Feb 17 01:42:50 crc kubenswrapper[4805]: E0217 01:42:50.795562 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:42:54 crc kubenswrapper[4805]: E0217 01:42:54.863460 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:42:57 crc kubenswrapper[4805]: I0217 01:42:57.345898 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:57 crc kubenswrapper[4805]: I0217 01:42:57.400185 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:57 crc kubenswrapper[4805]: I0217 01:42:57.585731 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9l4s"] Feb 17 01:42:58 crc kubenswrapper[4805]: I0217 01:42:58.413346 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k9l4s" podUID="8f110b37-538b-405e-9885-feb330b794b7" containerName="registry-server" containerID="cri-o://4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627" gracePeriod=2 Feb 17 01:42:58 crc kubenswrapper[4805]: I0217 01:42:58.993651 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.056973 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-utilities\") pod \"8f110b37-538b-405e-9885-feb330b794b7\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.057074 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2ncw\" (UniqueName: \"kubernetes.io/projected/8f110b37-538b-405e-9885-feb330b794b7-kube-api-access-q2ncw\") pod \"8f110b37-538b-405e-9885-feb330b794b7\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.057164 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-catalog-content\") pod \"8f110b37-538b-405e-9885-feb330b794b7\" (UID: \"8f110b37-538b-405e-9885-feb330b794b7\") " Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.058322 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-utilities" (OuterVolumeSpecName: "utilities") pod "8f110b37-538b-405e-9885-feb330b794b7" (UID: "8f110b37-538b-405e-9885-feb330b794b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.062538 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f110b37-538b-405e-9885-feb330b794b7-kube-api-access-q2ncw" (OuterVolumeSpecName: "kube-api-access-q2ncw") pod "8f110b37-538b-405e-9885-feb330b794b7" (UID: "8f110b37-538b-405e-9885-feb330b794b7"). InnerVolumeSpecName "kube-api-access-q2ncw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.159095 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.159124 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2ncw\" (UniqueName: \"kubernetes.io/projected/8f110b37-538b-405e-9885-feb330b794b7-kube-api-access-q2ncw\") on node \"crc\" DevicePath \"\"" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.194595 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f110b37-538b-405e-9885-feb330b794b7" (UID: "8f110b37-538b-405e-9885-feb330b794b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.260604 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f110b37-538b-405e-9885-feb330b794b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.429054 4805 generic.go:334] "Generic (PLEG): container finished" podID="8f110b37-538b-405e-9885-feb330b794b7" containerID="4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627" exitCode=0 Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.429125 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9l4s" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.429141 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9l4s" event={"ID":"8f110b37-538b-405e-9885-feb330b794b7","Type":"ContainerDied","Data":"4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627"} Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.429870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9l4s" event={"ID":"8f110b37-538b-405e-9885-feb330b794b7","Type":"ContainerDied","Data":"5084af16c2e1f85a75f6e804619d71c837daa4f5fd522ffbcbd3b72cfc902ac0"} Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.429913 4805 scope.go:117] "RemoveContainer" containerID="4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.466853 4805 scope.go:117] "RemoveContainer" containerID="fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.478127 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9l4s"] Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.488642 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k9l4s"] Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.495927 4805 scope.go:117] "RemoveContainer" containerID="4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.543507 4805 scope.go:117] "RemoveContainer" containerID="4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627" Feb 17 01:42:59 crc kubenswrapper[4805]: E0217 01:42:59.544288 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627\": container with ID starting with 4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627 not found: ID does not exist" containerID="4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.544350 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627"} err="failed to get container status \"4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627\": rpc error: code = NotFound desc = could not find container \"4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627\": container with ID starting with 4d97612661b914b9e4cdea6aa0595a6e9253516b920e79aafe3dbd33ac17e627 not found: ID does not exist" Feb 17 01:42:59 crc 
kubenswrapper[4805]: I0217 01:42:59.544379 4805 scope.go:117] "RemoveContainer" containerID="fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b" Feb 17 01:42:59 crc kubenswrapper[4805]: E0217 01:42:59.544764 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b\": container with ID starting with fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b not found: ID does not exist" containerID="fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.544786 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b"} err="failed to get container status \"fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b\": rpc error: code = NotFound desc = could not find container \"fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b\": container with ID starting with fcea16fd3d1f4b76df4922b40e78c87a2826cb4608acf9b4a70ad4c85ad84d2b not found: ID does not exist" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.544797 4805 scope.go:117] "RemoveContainer" containerID="4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62" Feb 17 01:42:59 crc kubenswrapper[4805]: E0217 01:42:59.545139 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62\": container with ID starting with 4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62 not found: ID does not exist" containerID="4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62" Feb 17 01:42:59 crc kubenswrapper[4805]: I0217 01:42:59.545158 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62"} err="failed to get container status \"4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62\": rpc error: code = NotFound desc = could not find container \"4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62\": container with ID starting with 4a8620dd41ef62d4184f0ceb06945224e5c73e8235b57f8636bc027ab8c6ff62 not found: ID does not exist" Feb 17 01:43:00 crc kubenswrapper[4805]: I0217 01:43:00.812231 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f110b37-538b-405e-9885-feb330b794b7" path="/var/lib/kubelet/pods/8f110b37-538b-405e-9885-feb330b794b7/volumes" Feb 17 01:43:02 crc kubenswrapper[4805]: E0217 01:43:02.786748 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:43:09 crc kubenswrapper[4805]: E0217 01:43:09.787000 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:43:13 crc kubenswrapper[4805]: 
E0217 01:43:13.787981 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:43:23 crc kubenswrapper[4805]: E0217 01:43:23.787995 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:43:27 crc kubenswrapper[4805]: E0217 01:43:27.787702 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:43:34 crc kubenswrapper[4805]: E0217 01:43:34.811376 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:43:42 crc kubenswrapper[4805]: E0217 01:43:42.789544 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:43:47 crc kubenswrapper[4805]: E0217 01:43:47.790404 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:43:55 crc kubenswrapper[4805]: E0217 01:43:55.788666 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:44:02 crc kubenswrapper[4805]: E0217 01:44:02.788427 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:44:09 crc kubenswrapper[4805]: E0217 01:44:09.787823 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" 
podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:44:17 crc kubenswrapper[4805]: E0217 01:44:17.787587 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:44:23 crc kubenswrapper[4805]: I0217 01:44:23.077410 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:44:23 crc kubenswrapper[4805]: I0217 01:44:23.078050 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:44:24 crc kubenswrapper[4805]: E0217 01:44:24.813765 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:44:32 crc kubenswrapper[4805]: I0217 01:44:32.788051 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:44:32 crc kubenswrapper[4805]: E0217 01:44:32.884803 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:44:32 crc kubenswrapper[4805]: E0217 01:44:32.884880 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:44:32 crc kubenswrapper[4805]: E0217 01:44:32.885062 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:44:32 crc kubenswrapper[4805]: E0217 01:44:32.886314 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:44:35 crc kubenswrapper[4805]: E0217 01:44:35.900875 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:44:35 crc kubenswrapper[4805]: E0217 01:44:35.901743 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:44:35 crc kubenswrapper[4805]: E0217 01:44:35.901968 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:44:35 crc kubenswrapper[4805]: E0217 01:44:35.905578 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:44:47 crc kubenswrapper[4805]: E0217 01:44:47.788500 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:44:49 crc kubenswrapper[4805]: I0217 01:44:49.983759 4805 trace.go:236] Trace[336069938]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (17-Feb-2026 01:44:48.972) (total time: 1011ms): Feb 17 01:44:49 crc kubenswrapper[4805]: Trace[336069938]: [1.011343127s] [1.011343127s] END Feb 17 01:44:50 crc kubenswrapper[4805]: E0217 01:44:50.787528 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:44:53 crc kubenswrapper[4805]: I0217 01:44:53.076723 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:44:53 crc kubenswrapper[4805]: I0217 01:44:53.077108 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.210453 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh"] Feb 17 01:45:00 crc kubenswrapper[4805]: E0217 01:45:00.211613 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f110b37-538b-405e-9885-feb330b794b7" containerName="extract-utilities" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.211634 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f110b37-538b-405e-9885-feb330b794b7" containerName="extract-utilities" Feb 17 01:45:00 crc kubenswrapper[4805]: E0217 01:45:00.211656 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f110b37-538b-405e-9885-feb330b794b7" containerName="registry-server" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.211664 4805 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="8f110b37-538b-405e-9885-feb330b794b7" containerName="registry-server" Feb 17 01:45:00 crc kubenswrapper[4805]: E0217 01:45:00.211678 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f110b37-538b-405e-9885-feb330b794b7" containerName="extract-content" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.211686 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f110b37-538b-405e-9885-feb330b794b7" containerName="extract-content" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.211971 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f110b37-538b-405e-9885-feb330b794b7" containerName="registry-server" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.212981 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.217053 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.217295 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.221648 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh"] Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.371982 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thk8\" (UniqueName: \"kubernetes.io/projected/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-kube-api-access-4thk8\") pod \"collect-profiles-29521545-k4dfh\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.372301 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-secret-volume\") pod \"collect-profiles-29521545-k4dfh\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.372389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-config-volume\") pod \"collect-profiles-29521545-k4dfh\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.474503 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-secret-volume\") pod \"collect-profiles-29521545-k4dfh\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.474579 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-config-volume\") pod 
\"collect-profiles-29521545-k4dfh\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.474675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4thk8\" (UniqueName: \"kubernetes.io/projected/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-kube-api-access-4thk8\") pod \"collect-profiles-29521545-k4dfh\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.475524 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-config-volume\") pod \"collect-profiles-29521545-k4dfh\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.489535 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-secret-volume\") pod \"collect-profiles-29521545-k4dfh\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.495474 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4thk8\" (UniqueName: \"kubernetes.io/projected/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-kube-api-access-4thk8\") pod \"collect-profiles-29521545-k4dfh\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:00 crc kubenswrapper[4805]: I0217 01:45:00.544446 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:01 crc kubenswrapper[4805]: I0217 01:45:01.067480 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh"] Feb 17 01:45:01 crc kubenswrapper[4805]: I0217 01:45:01.398533 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" event={"ID":"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6","Type":"ContainerStarted","Data":"0a19ce5b2c8d5462a8431d2170eae8c665590a9e7009c44b86bf432972696a0c"} Feb 17 01:45:01 crc kubenswrapper[4805]: I0217 01:45:01.398860 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" event={"ID":"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6","Type":"ContainerStarted","Data":"9b94048e34953e95497fde84bb03634eea15dedd3ecc08cf70945a51c4f27eca"} Feb 17 01:45:01 crc kubenswrapper[4805]: I0217 01:45:01.421974 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" podStartSLOduration=1.42195679 podStartE2EDuration="1.42195679s" podCreationTimestamp="2026-02-17 01:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 01:45:01.416109686 +0000 UTC m=+4927.431919104" watchObservedRunningTime="2026-02-17 01:45:01.42195679 +0000 UTC m=+4927.437766188" Feb 17 01:45:01 crc kubenswrapper[4805]: E0217 01:45:01.788638 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:45:02 crc kubenswrapper[4805]: I0217 01:45:02.409761 4805 generic.go:334] "Generic (PLEG): container finished" podID="6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6" containerID="0a19ce5b2c8d5462a8431d2170eae8c665590a9e7009c44b86bf432972696a0c" exitCode=0 Feb 17 01:45:02 crc kubenswrapper[4805]: I0217 01:45:02.409816 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" event={"ID":"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6","Type":"ContainerDied","Data":"0a19ce5b2c8d5462a8431d2170eae8c665590a9e7009c44b86bf432972696a0c"} Feb 17 01:45:03 crc kubenswrapper[4805]: I0217 01:45:03.897995 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:03 crc kubenswrapper[4805]: I0217 01:45:03.961655 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-config-volume\") pod \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " Feb 17 01:45:03 crc kubenswrapper[4805]: I0217 01:45:03.962078 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4thk8\" (UniqueName: \"kubernetes.io/projected/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-kube-api-access-4thk8\") pod \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " Feb 17 01:45:03 crc kubenswrapper[4805]: I0217 01:45:03.962440 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-secret-volume\") pod \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\" (UID: \"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6\") " Feb 17 01:45:03 crc kubenswrapper[4805]: I0217 01:45:03.962894 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-config-volume" (OuterVolumeSpecName: "config-volume") pod "6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6" (UID: "6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 01:45:03 crc kubenswrapper[4805]: I0217 01:45:03.963533 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 01:45:03 crc kubenswrapper[4805]: I0217 01:45:03.984521 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6" (UID: "6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 01:45:03 crc kubenswrapper[4805]: I0217 01:45:03.984819 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-kube-api-access-4thk8" (OuterVolumeSpecName: "kube-api-access-4thk8") pod "6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6" (UID: "6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6"). InnerVolumeSpecName "kube-api-access-4thk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:45:04 crc kubenswrapper[4805]: I0217 01:45:04.065906 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4thk8\" (UniqueName: \"kubernetes.io/projected/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-kube-api-access-4thk8\") on node \"crc\" DevicePath \"\"" Feb 17 01:45:04 crc kubenswrapper[4805]: I0217 01:45:04.065939 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 01:45:04 crc kubenswrapper[4805]: I0217 01:45:04.436208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" event={"ID":"6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6","Type":"ContainerDied","Data":"9b94048e34953e95497fde84bb03634eea15dedd3ecc08cf70945a51c4f27eca"} Feb 17 01:45:04 crc kubenswrapper[4805]: I0217 01:45:04.436268 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b94048e34953e95497fde84bb03634eea15dedd3ecc08cf70945a51c4f27eca" Feb 17 01:45:04 crc kubenswrapper[4805]: I0217 01:45:04.436304 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521545-k4dfh" Feb 17 01:45:04 crc kubenswrapper[4805]: I0217 01:45:04.533107 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b"] Feb 17 01:45:04 crc kubenswrapper[4805]: I0217 01:45:04.549754 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521500-sz97b"] Feb 17 01:45:04 crc kubenswrapper[4805]: E0217 01:45:04.803073 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:45:04 crc kubenswrapper[4805]: I0217 01:45:04.821092 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7176d28-cd1d-455f-b31a-69211b464bf1" path="/var/lib/kubelet/pods/d7176d28-cd1d-455f-b31a-69211b464bf1/volumes" Feb 17 01:45:17 crc kubenswrapper[4805]: E0217 01:45:17.787404 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:45:19 crc kubenswrapper[4805]: E0217 01:45:19.787555 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:45:20 crc kubenswrapper[4805]: I0217 01:45:20.841270 4805 scope.go:117] "RemoveContainer" containerID="f9e09533c797f23ff2934b9ae5ca8a4036ab5de4d92decbeafceb6ed58ea1ec8" Feb 17 01:45:23 crc kubenswrapper[4805]: I0217 01:45:23.076885 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:45:23 crc kubenswrapper[4805]: I0217 01:45:23.077316 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:45:23 crc kubenswrapper[4805]: I0217 01:45:23.077458 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 01:45:23 crc kubenswrapper[4805]: I0217 01:45:23.079091 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 01:45:23 crc kubenswrapper[4805]: I0217 01:45:23.079221 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" gracePeriod=600 Feb 17 01:45:23 crc kubenswrapper[4805]: E0217 01:45:23.214625 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:45:23 crc kubenswrapper[4805]: I0217 01:45:23.711006 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" exitCode=0 Feb 17 01:45:23 crc kubenswrapper[4805]: I0217 01:45:23.711104 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525"} Feb 17 01:45:23 crc kubenswrapper[4805]: I0217 01:45:23.711202 4805 scope.go:117] "RemoveContainer" containerID="c8cc79b212c9b496269deac6ab82414bf335d0bbc5c5eb8163f2cba41c1cf7a4" Feb 17 01:45:23 crc kubenswrapper[4805]: I0217 01:45:23.711878 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:45:23 crc kubenswrapper[4805]: E0217 01:45:23.712151 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" 
podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:45:29 crc kubenswrapper[4805]: E0217 01:45:29.791103 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:45:31 crc kubenswrapper[4805]: E0217 01:45:31.787698 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:45:37 crc kubenswrapper[4805]: I0217 01:45:37.785125 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:45:37 crc kubenswrapper[4805]: E0217 01:45:37.786144 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:45:42 crc kubenswrapper[4805]: E0217 01:45:42.786974 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:45:42 crc kubenswrapper[4805]: E0217 01:45:42.788133 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:45:51 crc kubenswrapper[4805]: I0217 01:45:51.785203 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:45:51 crc kubenswrapper[4805]: E0217 01:45:51.786036 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:45:54 crc kubenswrapper[4805]: E0217 01:45:54.805084 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:45:57 crc kubenswrapper[4805]: E0217 01:45:57.788012 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:46:03 crc kubenswrapper[4805]: I0217 01:46:03.786184 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:46:03 crc kubenswrapper[4805]: E0217 01:46:03.787415 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:46:08 crc kubenswrapper[4805]: E0217 01:46:08.787466 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:46:10 crc kubenswrapper[4805]: E0217 01:46:10.792406 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:46:16 crc kubenswrapper[4805]: I0217 01:46:16.785004 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:46:16 crc kubenswrapper[4805]: E0217 01:46:16.786087 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:46:19 crc kubenswrapper[4805]: E0217 01:46:19.787439 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:46:22 crc kubenswrapper[4805]: E0217 01:46:22.787393 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:46:28 crc kubenswrapper[4805]: I0217 01:46:28.785998 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:46:28 crc kubenswrapper[4805]: E0217 01:46:28.787146 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:46:32 crc kubenswrapper[4805]: E0217 01:46:32.788153 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:46:34 crc kubenswrapper[4805]: E0217 01:46:34.801003 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:46:41 crc kubenswrapper[4805]: I0217 01:46:41.784414 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:46:41 crc kubenswrapper[4805]: E0217 01:46:41.785300 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:46:44 crc kubenswrapper[4805]: E0217 01:46:44.792565 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:46:48 crc kubenswrapper[4805]: E0217 01:46:48.787826 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:46:54 crc kubenswrapper[4805]: I0217 01:46:54.806793 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:46:54 crc kubenswrapper[4805]: E0217 01:46:54.808012 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:46:59 crc kubenswrapper[4805]: E0217 01:46:59.787218 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:47:01 crc kubenswrapper[4805]: E0217 01:47:01.785999 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:47:07 crc kubenswrapper[4805]: I0217 01:47:07.786234 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:47:07 crc kubenswrapper[4805]: E0217 01:47:07.787141 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:47:12 crc kubenswrapper[4805]: E0217 01:47:12.786427 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:47:12 crc kubenswrapper[4805]: E0217 01:47:12.786447 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:47:19 crc kubenswrapper[4805]: I0217 01:47:19.784742 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:47:19 crc kubenswrapper[4805]: E0217 01:47:19.785700 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:47:24 crc kubenswrapper[4805]: E0217 01:47:24.795808 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:47:26 crc kubenswrapper[4805]: E0217 01:47:26.788205 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:47:33 
crc kubenswrapper[4805]: I0217 01:47:33.785389 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:47:33 crc kubenswrapper[4805]: E0217 01:47:33.786084 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:47:39 crc kubenswrapper[4805]: E0217 01:47:39.787392 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:47:41 crc kubenswrapper[4805]: E0217 01:47:41.788193 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:47:44 crc kubenswrapper[4805]: I0217 01:47:44.795151 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:47:44 crc kubenswrapper[4805]: E0217 01:47:44.795916 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:47:51 crc kubenswrapper[4805]: E0217 01:47:51.787844 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:47:55 crc kubenswrapper[4805]: E0217 01:47:55.788642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:47:58 crc kubenswrapper[4805]: I0217 01:47:58.785190 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:47:58 crc kubenswrapper[4805]: E0217 01:47:58.786050 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:48:06 crc kubenswrapper[4805]: E0217 01:48:06.787113 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:48:06 crc kubenswrapper[4805]: E0217 01:48:06.787180 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:48:13 crc kubenswrapper[4805]: I0217 01:48:13.784968 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:48:13 crc kubenswrapper[4805]: E0217 01:48:13.785559 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:48:17 crc kubenswrapper[4805]: E0217 01:48:17.787180 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:48:21 crc kubenswrapper[4805]: E0217 01:48:21.787546 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:48:26 crc kubenswrapper[4805]: I0217 01:48:26.784973 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:48:26 crc kubenswrapper[4805]: E0217 01:48:26.785906 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:48:31 crc kubenswrapper[4805]: E0217 01:48:31.790162 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:48:34 crc kubenswrapper[4805]: E0217 01:48:34.805622 4805 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.052853 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gjqlt/must-gather-hvjjh"] Feb 17 01:48:35 crc kubenswrapper[4805]: E0217 01:48:35.053268 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6" containerName="collect-profiles" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.053284 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6" containerName="collect-profiles" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.053512 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fbe9f63-f7f2-4c45-a9a2-b06031f6cce6" containerName="collect-profiles" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.054551 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjqlt/must-gather-hvjjh" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.056117 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gjqlt"/"kube-root-ca.crt" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.057619 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gjqlt"/"openshift-service-ca.crt" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.058241 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gjqlt"/"default-dockercfg-9vcnk" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.082484 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gjqlt/must-gather-hvjjh"] Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.227205 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v9l4\" (UniqueName: \"kubernetes.io/projected/c521d6b8-b6fe-477e-84ac-db6f9a416901-kube-api-access-5v9l4\") pod \"must-gather-hvjjh\" (UID: \"c521d6b8-b6fe-477e-84ac-db6f9a416901\") " pod="openshift-must-gather-gjqlt/must-gather-hvjjh" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.227271 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c521d6b8-b6fe-477e-84ac-db6f9a416901-must-gather-output\") pod \"must-gather-hvjjh\" (UID: \"c521d6b8-b6fe-477e-84ac-db6f9a416901\") " pod="openshift-must-gather-gjqlt/must-gather-hvjjh" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.330002 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v9l4\" (UniqueName: \"kubernetes.io/projected/c521d6b8-b6fe-477e-84ac-db6f9a416901-kube-api-access-5v9l4\") pod \"must-gather-hvjjh\" (UID: \"c521d6b8-b6fe-477e-84ac-db6f9a416901\") " pod="openshift-must-gather-gjqlt/must-gather-hvjjh" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.330078 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c521d6b8-b6fe-477e-84ac-db6f9a416901-must-gather-output\") pod \"must-gather-hvjjh\" (UID: 
\"c521d6b8-b6fe-477e-84ac-db6f9a416901\") " pod="openshift-must-gather-gjqlt/must-gather-hvjjh" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.330884 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c521d6b8-b6fe-477e-84ac-db6f9a416901-must-gather-output\") pod \"must-gather-hvjjh\" (UID: \"c521d6b8-b6fe-477e-84ac-db6f9a416901\") " pod="openshift-must-gather-gjqlt/must-gather-hvjjh" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.354899 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v9l4\" (UniqueName: \"kubernetes.io/projected/c521d6b8-b6fe-477e-84ac-db6f9a416901-kube-api-access-5v9l4\") pod \"must-gather-hvjjh\" (UID: \"c521d6b8-b6fe-477e-84ac-db6f9a416901\") " pod="openshift-must-gather-gjqlt/must-gather-hvjjh" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.373089 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjqlt/must-gather-hvjjh" Feb 17 01:48:35 crc kubenswrapper[4805]: I0217 01:48:35.915032 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gjqlt/must-gather-hvjjh"] Feb 17 01:48:36 crc kubenswrapper[4805]: I0217 01:48:36.597871 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gjqlt/must-gather-hvjjh" event={"ID":"c521d6b8-b6fe-477e-84ac-db6f9a416901","Type":"ContainerStarted","Data":"a13560bf88f5c382635c51fe346ed7db088143c99f46efa04fccb7243b3fa285"} Feb 17 01:48:40 crc kubenswrapper[4805]: I0217 01:48:40.784500 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:48:40 crc kubenswrapper[4805]: E0217 01:48:40.785432 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:48:43 crc kubenswrapper[4805]: I0217 01:48:43.675078 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gjqlt/must-gather-hvjjh" event={"ID":"c521d6b8-b6fe-477e-84ac-db6f9a416901","Type":"ContainerStarted","Data":"6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3"} Feb 17 01:48:43 crc kubenswrapper[4805]: I0217 01:48:43.675774 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gjqlt/must-gather-hvjjh" event={"ID":"c521d6b8-b6fe-477e-84ac-db6f9a416901","Type":"ContainerStarted","Data":"9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439"} Feb 17 01:48:43 crc kubenswrapper[4805]: I0217 01:48:43.703909 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gjqlt/must-gather-hvjjh" podStartSLOduration=2.094524325 podStartE2EDuration="8.703891288s" podCreationTimestamp="2026-02-17 01:48:35 +0000 UTC" firstStartedPulling="2026-02-17 01:48:35.920239823 +0000 UTC m=+5141.936049221" lastFinishedPulling="2026-02-17 01:48:42.529606786 +0000 UTC m=+5148.545416184" observedRunningTime="2026-02-17 01:48:43.694099345 +0000 UTC m=+5149.709908783" watchObservedRunningTime="2026-02-17 01:48:43.703891288 +0000 UTC m=+5149.719700696" Feb 17 01:48:43 crc kubenswrapper[4805]: E0217 
01:48:43.787893 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:48:47 crc kubenswrapper[4805]: E0217 01:48:47.786227 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:48:48 crc kubenswrapper[4805]: I0217 01:48:48.116965 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gjqlt/crc-debug-jxzqq"] Feb 17 01:48:48 crc kubenswrapper[4805]: I0217 01:48:48.118591 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" Feb 17 01:48:48 crc kubenswrapper[4805]: I0217 01:48:48.234373 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9thhw\" (UniqueName: \"kubernetes.io/projected/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-kube-api-access-9thhw\") pod \"crc-debug-jxzqq\" (UID: \"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40\") " pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" Feb 17 01:48:48 crc kubenswrapper[4805]: I0217 01:48:48.234903 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-host\") pod \"crc-debug-jxzqq\" (UID: \"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40\") " pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" Feb 17 01:48:48 crc kubenswrapper[4805]: I0217 01:48:48.337237 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9thhw\" (UniqueName: \"kubernetes.io/projected/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-kube-api-access-9thhw\") pod \"crc-debug-jxzqq\" (UID: \"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40\") " pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" Feb 17 01:48:48 crc kubenswrapper[4805]: I0217 01:48:48.337479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-host\") pod \"crc-debug-jxzqq\" (UID: \"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40\") " pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" Feb 17 01:48:48 crc kubenswrapper[4805]: I0217 01:48:48.337663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-host\") pod \"crc-debug-jxzqq\" (UID: \"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40\") " pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" Feb 17 01:48:48 crc kubenswrapper[4805]: I0217 01:48:48.357428 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9thhw\" (UniqueName: \"kubernetes.io/projected/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-kube-api-access-9thhw\") pod \"crc-debug-jxzqq\" (UID: \"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40\") " pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" Feb 17 01:48:48 crc kubenswrapper[4805]: I0217 01:48:48.442062 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" Feb 17 01:48:48 crc kubenswrapper[4805]: W0217 01:48:48.471286 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49fd0ce4_ac7d_4ee8_8100_eb51d9712b40.slice/crio-e62a8099c1fa6589fe578f791fd368ee5f6704bd053ff24fd564279a5833df33 WatchSource:0}: Error finding container e62a8099c1fa6589fe578f791fd368ee5f6704bd053ff24fd564279a5833df33: Status 404 returned error can't find the container with id e62a8099c1fa6589fe578f791fd368ee5f6704bd053ff24fd564279a5833df33 Feb 17 01:48:48 crc kubenswrapper[4805]: I0217 01:48:48.725543 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" event={"ID":"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40","Type":"ContainerStarted","Data":"e62a8099c1fa6589fe578f791fd368ee5f6704bd053ff24fd564279a5833df33"} Feb 17 01:48:53 crc kubenswrapper[4805]: I0217 01:48:53.784651 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:48:53 crc kubenswrapper[4805]: E0217 01:48:53.785299 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:48:55 crc kubenswrapper[4805]: E0217 01:48:55.786723 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:48:58 crc kubenswrapper[4805]: E0217 01:48:58.790556 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:48:59 crc kubenswrapper[4805]: I0217 01:48:59.857562 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" event={"ID":"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40","Type":"ContainerStarted","Data":"33631b1de522847d9baee1ce7be64d6efe72c0836d0201a11b3b3e9fe2e6af56"} Feb 17 01:48:59 crc kubenswrapper[4805]: I0217 01:48:59.874269 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" podStartSLOduration=1.684897173 podStartE2EDuration="11.874249204s" podCreationTimestamp="2026-02-17 01:48:48 +0000 UTC" firstStartedPulling="2026-02-17 01:48:48.474133234 +0000 UTC m=+5154.489942632" lastFinishedPulling="2026-02-17 01:48:58.663485265 +0000 UTC m=+5164.679294663" observedRunningTime="2026-02-17 01:48:59.87301509 +0000 UTC m=+5165.888824488" watchObservedRunningTime="2026-02-17 01:48:59.874249204 +0000 UTC m=+5165.890058612" Feb 17 01:49:05 crc kubenswrapper[4805]: I0217 01:49:05.784542 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 
01:49:05 crc kubenswrapper[4805]: E0217 01:49:05.785252 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:49:10 crc kubenswrapper[4805]: E0217 01:49:10.823115 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:49:12 crc kubenswrapper[4805]: E0217 01:49:12.785625 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:49:15 crc kubenswrapper[4805]: I0217 01:49:15.026613 4805 generic.go:334] "Generic (PLEG): container finished" podID="49fd0ce4-ac7d-4ee8-8100-eb51d9712b40" containerID="33631b1de522847d9baee1ce7be64d6efe72c0836d0201a11b3b3e9fe2e6af56" exitCode=0 Feb 17 01:49:15 crc kubenswrapper[4805]: I0217 01:49:15.026795 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" event={"ID":"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40","Type":"ContainerDied","Data":"33631b1de522847d9baee1ce7be64d6efe72c0836d0201a11b3b3e9fe2e6af56"} Feb 17 01:49:16 crc kubenswrapper[4805]: I0217 01:49:16.188902 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" Feb 17 01:49:16 crc kubenswrapper[4805]: I0217 01:49:16.234313 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gjqlt/crc-debug-jxzqq"] Feb 17 01:49:16 crc kubenswrapper[4805]: I0217 01:49:16.246728 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gjqlt/crc-debug-jxzqq"] Feb 17 01:49:16 crc kubenswrapper[4805]: I0217 01:49:16.248124 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9thhw\" (UniqueName: \"kubernetes.io/projected/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-kube-api-access-9thhw\") pod \"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40\" (UID: \"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40\") " Feb 17 01:49:16 crc kubenswrapper[4805]: I0217 01:49:16.248211 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-host\") pod \"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40\" (UID: \"49fd0ce4-ac7d-4ee8-8100-eb51d9712b40\") " Feb 17 01:49:16 crc kubenswrapper[4805]: I0217 01:49:16.248295 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-host" (OuterVolumeSpecName: "host") pod "49fd0ce4-ac7d-4ee8-8100-eb51d9712b40" (UID: "49fd0ce4-ac7d-4ee8-8100-eb51d9712b40"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 01:49:16 crc kubenswrapper[4805]: I0217 01:49:16.248843 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-host\") on node \"crc\" DevicePath \"\"" Feb 17 01:49:16 crc kubenswrapper[4805]: I0217 01:49:16.254971 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-kube-api-access-9thhw" (OuterVolumeSpecName: "kube-api-access-9thhw") pod "49fd0ce4-ac7d-4ee8-8100-eb51d9712b40" (UID: "49fd0ce4-ac7d-4ee8-8100-eb51d9712b40"). InnerVolumeSpecName "kube-api-access-9thhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:49:16 crc kubenswrapper[4805]: I0217 01:49:16.349588 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9thhw\" (UniqueName: \"kubernetes.io/projected/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40-kube-api-access-9thhw\") on node \"crc\" DevicePath \"\"" Feb 17 01:49:16 crc kubenswrapper[4805]: I0217 01:49:16.799180 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fd0ce4-ac7d-4ee8-8100-eb51d9712b40" path="/var/lib/kubelet/pods/49fd0ce4-ac7d-4ee8-8100-eb51d9712b40/volumes" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.053889 4805 scope.go:117] "RemoveContainer" containerID="33631b1de522847d9baee1ce7be64d6efe72c0836d0201a11b3b3e9fe2e6af56" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.054027 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjqlt/crc-debug-jxzqq" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.478309 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gjqlt/crc-debug-zckkh"] Feb 17 01:49:17 crc kubenswrapper[4805]: E0217 01:49:17.480531 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fd0ce4-ac7d-4ee8-8100-eb51d9712b40" containerName="container-00" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.480709 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fd0ce4-ac7d-4ee8-8100-eb51d9712b40" containerName="container-00" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.481241 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fd0ce4-ac7d-4ee8-8100-eb51d9712b40" containerName="container-00" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.482750 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gjqlt/crc-debug-zckkh" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.575523 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqmrk\" (UniqueName: \"kubernetes.io/projected/b0392424-c102-4e07-a464-e32bd41da3ef-kube-api-access-cqmrk\") pod \"crc-debug-zckkh\" (UID: \"b0392424-c102-4e07-a464-e32bd41da3ef\") " pod="openshift-must-gather-gjqlt/crc-debug-zckkh" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.575629 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0392424-c102-4e07-a464-e32bd41da3ef-host\") pod \"crc-debug-zckkh\" (UID: \"b0392424-c102-4e07-a464-e32bd41da3ef\") " pod="openshift-must-gather-gjqlt/crc-debug-zckkh" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.677926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqmrk\" (UniqueName: \"kubernetes.io/projected/b0392424-c102-4e07-a464-e32bd41da3ef-kube-api-access-cqmrk\") pod \"crc-debug-zckkh\" (UID: \"b0392424-c102-4e07-a464-e32bd41da3ef\") " pod="openshift-must-gather-gjqlt/crc-debug-zckkh" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.678026 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0392424-c102-4e07-a464-e32bd41da3ef-host\") pod \"crc-debug-zckkh\" (UID: \"b0392424-c102-4e07-a464-e32bd41da3ef\") " pod="openshift-must-gather-gjqlt/crc-debug-zckkh" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.678234 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0392424-c102-4e07-a464-e32bd41da3ef-host\") pod \"crc-debug-zckkh\" (UID: \"b0392424-c102-4e07-a464-e32bd41da3ef\") " pod="openshift-must-gather-gjqlt/crc-debug-zckkh" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.700921 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqmrk\" (UniqueName: \"kubernetes.io/projected/b0392424-c102-4e07-a464-e32bd41da3ef-kube-api-access-cqmrk\") pod \"crc-debug-zckkh\" (UID: \"b0392424-c102-4e07-a464-e32bd41da3ef\") " pod="openshift-must-gather-gjqlt/crc-debug-zckkh" Feb 17 01:49:17 crc kubenswrapper[4805]: I0217 01:49:17.809973 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gjqlt/crc-debug-zckkh" Feb 17 01:49:17 crc kubenswrapper[4805]: W0217 01:49:17.849647 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0392424_c102_4e07_a464_e32bd41da3ef.slice/crio-94e4853e32b8e3eef5900fdce91c9aad3d8a2aaed7ab96c2248ebf471da4267d WatchSource:0}: Error finding container 94e4853e32b8e3eef5900fdce91c9aad3d8a2aaed7ab96c2248ebf471da4267d: Status 404 returned error can't find the container with id 94e4853e32b8e3eef5900fdce91c9aad3d8a2aaed7ab96c2248ebf471da4267d Feb 17 01:49:18 crc kubenswrapper[4805]: I0217 01:49:18.086760 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gjqlt/crc-debug-zckkh" event={"ID":"b0392424-c102-4e07-a464-e32bd41da3ef","Type":"ContainerStarted","Data":"94e4853e32b8e3eef5900fdce91c9aad3d8a2aaed7ab96c2248ebf471da4267d"} Feb 17 01:49:18 crc kubenswrapper[4805]: I0217 01:49:18.784601 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:49:18 crc kubenswrapper[4805]: E0217 01:49:18.785301 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:49:19 crc kubenswrapper[4805]: I0217 01:49:19.100984 4805 generic.go:334] "Generic (PLEG): container finished" podID="b0392424-c102-4e07-a464-e32bd41da3ef" containerID="94e18089a942b1057e3d02a09a4b694fbc453308986d6e16d48418e62de0d1f6" exitCode=1 Feb 17 01:49:19 crc kubenswrapper[4805]: I0217 01:49:19.101022 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gjqlt/crc-debug-zckkh" event={"ID":"b0392424-c102-4e07-a464-e32bd41da3ef","Type":"ContainerDied","Data":"94e18089a942b1057e3d02a09a4b694fbc453308986d6e16d48418e62de0d1f6"} Feb 17 01:49:19 crc kubenswrapper[4805]: I0217 01:49:19.138454 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-gjqlt/crc-debug-zckkh"] Feb 17 01:49:19 crc kubenswrapper[4805]: I0217 01:49:19.152553 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gjqlt/crc-debug-zckkh"] Feb 17 01:49:20 crc kubenswrapper[4805]: I0217 01:49:20.211192 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjqlt/crc-debug-zckkh" Feb 17 01:49:20 crc kubenswrapper[4805]: I0217 01:49:20.336785 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0392424-c102-4e07-a464-e32bd41da3ef-host\") pod \"b0392424-c102-4e07-a464-e32bd41da3ef\" (UID: \"b0392424-c102-4e07-a464-e32bd41da3ef\") " Feb 17 01:49:20 crc kubenswrapper[4805]: I0217 01:49:20.336928 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0392424-c102-4e07-a464-e32bd41da3ef-host" (OuterVolumeSpecName: "host") pod "b0392424-c102-4e07-a464-e32bd41da3ef" (UID: "b0392424-c102-4e07-a464-e32bd41da3ef"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 01:49:20 crc kubenswrapper[4805]: I0217 01:49:20.336995 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqmrk\" (UniqueName: \"kubernetes.io/projected/b0392424-c102-4e07-a464-e32bd41da3ef-kube-api-access-cqmrk\") pod \"b0392424-c102-4e07-a464-e32bd41da3ef\" (UID: \"b0392424-c102-4e07-a464-e32bd41da3ef\") " Feb 17 01:49:20 crc kubenswrapper[4805]: I0217 01:49:20.338375 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b0392424-c102-4e07-a464-e32bd41da3ef-host\") on node \"crc\" DevicePath \"\"" Feb 17 01:49:20 crc kubenswrapper[4805]: I0217 01:49:20.343622 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0392424-c102-4e07-a464-e32bd41da3ef-kube-api-access-cqmrk" (OuterVolumeSpecName: "kube-api-access-cqmrk") pod "b0392424-c102-4e07-a464-e32bd41da3ef" (UID: "b0392424-c102-4e07-a464-e32bd41da3ef"). InnerVolumeSpecName "kube-api-access-cqmrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:49:20 crc kubenswrapper[4805]: I0217 01:49:20.441076 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqmrk\" (UniqueName: \"kubernetes.io/projected/b0392424-c102-4e07-a464-e32bd41da3ef-kube-api-access-cqmrk\") on node \"crc\" DevicePath \"\"" Feb 17 01:49:20 crc kubenswrapper[4805]: I0217 01:49:20.796484 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0392424-c102-4e07-a464-e32bd41da3ef" path="/var/lib/kubelet/pods/b0392424-c102-4e07-a464-e32bd41da3ef/volumes" Feb 17 01:49:21 crc kubenswrapper[4805]: I0217 01:49:21.123460 4805 scope.go:117] "RemoveContainer" containerID="94e18089a942b1057e3d02a09a4b694fbc453308986d6e16d48418e62de0d1f6" Feb 17 01:49:21 crc kubenswrapper[4805]: I0217 01:49:21.123509 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gjqlt/crc-debug-zckkh" Feb 17 01:49:23 crc kubenswrapper[4805]: E0217 01:49:23.786356 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:49:25 crc kubenswrapper[4805]: E0217 01:49:25.786255 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:49:30 crc kubenswrapper[4805]: I0217 01:49:30.785801 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:49:30 crc kubenswrapper[4805]: E0217 01:49:30.788531 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:49:34 crc kubenswrapper[4805]: I0217 01:49:34.794994 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:49:34 crc kubenswrapper[4805]: E0217 01:49:34.915105 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:49:34 crc kubenswrapper[4805]: E0217 01:49:34.915159 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:49:34 crc kubenswrapper[4805]: E0217 01:49:34.915281 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:49:34 crc kubenswrapper[4805]: E0217 01:49:34.916506 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:49:39 crc kubenswrapper[4805]: E0217 01:49:39.906901 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:49:39 crc kubenswrapper[4805]: E0217 01:49:39.907448 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:49:39 crc kubenswrapper[4805]: E0217 01:49:39.907596 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:49:39 crc kubenswrapper[4805]: E0217 01:49:39.908703 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:49:45 crc kubenswrapper[4805]: I0217 01:49:45.785414 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:49:45 crc kubenswrapper[4805]: E0217 01:49:45.786440 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:49:46 crc kubenswrapper[4805]: E0217 01:49:46.786848 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:49:52 crc kubenswrapper[4805]: E0217 01:49:52.790647 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:49:59 crc kubenswrapper[4805]: I0217 01:49:59.785779 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:49:59 crc kubenswrapper[4805]: E0217 01:49:59.786777 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:49:59 crc kubenswrapper[4805]: E0217 01:49:59.790639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:50:05 crc kubenswrapper[4805]: E0217 01:50:05.787084 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:50:10 crc kubenswrapper[4805]: I0217 01:50:10.784507 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:50:10 crc kubenswrapper[4805]: E0217 01:50:10.785127 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:50:11 crc kubenswrapper[4805]: E0217 01:50:11.786968 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:50:18 crc kubenswrapper[4805]: E0217 01:50:18.787349 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:50:22 crc kubenswrapper[4805]: E0217 01:50:22.787308 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:50:23 crc kubenswrapper[4805]: I0217 01:50:23.784990 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:50:24 crc kubenswrapper[4805]: I0217 01:50:24.913831 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"72cba03c5e9d28d8f63995ddf7a0a97ce08f7e75e3252cd3b8bd494acd70d944"} Feb 17 01:50:27 crc kubenswrapper[4805]: I0217 01:50:27.112367 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4684eac1-c5ec-46dd-b3f7-87dba4896232/aodh-api/0.log" Feb 17 01:50:27 crc kubenswrapper[4805]: I0217 01:50:27.264260 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4684eac1-c5ec-46dd-b3f7-87dba4896232/aodh-evaluator/0.log" Feb 17 01:50:27 crc kubenswrapper[4805]: I0217 01:50:27.291573 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4684eac1-c5ec-46dd-b3f7-87dba4896232/aodh-listener/0.log" Feb 17 01:50:27 crc kubenswrapper[4805]: I0217 01:50:27.346255 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_4684eac1-c5ec-46dd-b3f7-87dba4896232/aodh-notifier/0.log" Feb 17 01:50:27 crc kubenswrapper[4805]: I0217 01:50:27.465967 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-6dc7fccf86-pqgwz_c8311529-9b2c-449c-8086-387c3935bbd6/barbican-api-log/0.log" Feb 17 01:50:27 crc kubenswrapper[4805]: I0217 01:50:27.468030 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6dc7fccf86-pqgwz_c8311529-9b2c-449c-8086-387c3935bbd6/barbican-api/0.log" Feb 17 01:50:27 crc kubenswrapper[4805]: I0217 01:50:27.626113 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-584fd88cbb-md2tp_efcafb85-5938-470c-90a7-acfb359882af/barbican-keystone-listener/0.log" Feb 17 01:50:27 crc kubenswrapper[4805]: I0217 01:50:27.686950 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-584fd88cbb-md2tp_efcafb85-5938-470c-90a7-acfb359882af/barbican-keystone-listener-log/0.log" Feb 17 01:50:27 crc kubenswrapper[4805]: I0217 01:50:27.776112 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b88c58c9-fwsz2_911a5d99-5b74-4633-9d7a-40bee6bb01a4/barbican-worker/0.log" Feb 17 01:50:27 crc kubenswrapper[4805]: I0217 01:50:27.822142 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b88c58c9-fwsz2_911a5d99-5b74-4633-9d7a-40bee6bb01a4/barbican-worker-log/0.log" Feb 17 01:50:27 crc kubenswrapper[4805]: I0217 01:50:27.948675 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-29jk6_0093521f-7e1e-421e-a1ce-bf4e5612ba77/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:28 crc kubenswrapper[4805]: I0217 01:50:28.174428 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_78cfb873-5ac3-472d-91e4-299e5df21da3/proxy-httpd/0.log" Feb 17 01:50:28 crc kubenswrapper[4805]: I0217 01:50:28.192182 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_78cfb873-5ac3-472d-91e4-299e5df21da3/sg-core/0.log" Feb 17 01:50:28 crc kubenswrapper[4805]: I0217 01:50:28.250105 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_78cfb873-5ac3-472d-91e4-299e5df21da3/ceilometer-notification-agent/0.log" Feb 17 01:50:28 crc kubenswrapper[4805]: I0217 01:50:28.363526 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-f7nbf_22c1452d-5db0-4327-b0ad-59b577d64796/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:28 crc kubenswrapper[4805]: I0217 01:50:28.552868 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265/cinder-api/0.log" Feb 17 01:50:28 crc kubenswrapper[4805]: I0217 01:50:28.580340 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1b4d0c3e-fef3-4f34-b837-6fc9c4ecf265/cinder-api-log/0.log" Feb 17 01:50:28 crc kubenswrapper[4805]: I0217 01:50:28.673047 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3849aaa3-5b53-484e-9f8d-36eef09cb1b4/cinder-scheduler/0.log" Feb 17 01:50:28 crc kubenswrapper[4805]: I0217 01:50:28.780411 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3849aaa3-5b53-484e-9f8d-36eef09cb1b4/probe/0.log" Feb 17 01:50:28 crc kubenswrapper[4805]: I0217 01:50:28.836816 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-r4646_f8ffd9c0-6e03-4875-84f5-56e9cd20aa3a/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:28 crc kubenswrapper[4805]: I0217 01:50:28.998669 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5z6fm_574d6680-e445-454e-b172-e677f2339cd2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:29 crc kubenswrapper[4805]: I0217 01:50:29.044043 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6559847fc9-56cm5_c3625ac6-5d39-453f-9237-65cde10f4733/init/0.log" Feb 17 01:50:29 crc kubenswrapper[4805]: I0217 01:50:29.234827 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6559847fc9-56cm5_c3625ac6-5d39-453f-9237-65cde10f4733/dnsmasq-dns/0.log" Feb 17 01:50:29 crc kubenswrapper[4805]: I0217 01:50:29.237327 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6559847fc9-56cm5_c3625ac6-5d39-453f-9237-65cde10f4733/init/0.log" Feb 17 01:50:29 crc kubenswrapper[4805]: I0217 01:50:29.800490 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7756f86689-rb9tx_df3b59bd-7b58-4ea5-8cdb-f25fcbf13793/heat-api/0.log" Feb 17 01:50:29 crc kubenswrapper[4805]: I0217 01:50:29.968685 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7b4c598ff7-vv75x_13d834c0-2408-456a-9ffd-9333c2c0e26e/heat-engine/0.log" Feb 17 01:50:29 crc kubenswrapper[4805]: I0217 01:50:29.969162 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-875d6bfdc-p74bh_f164372a-5796-4984-8913-43ed2d3b5e6f/heat-cfnapi/0.log" Feb 17 01:50:30 crc kubenswrapper[4805]: I0217 01:50:30.107681 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5sq65_7077a918-ba16-4a9a-90c5-3fcf25331039/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:30 crc kubenswrapper[4805]: I0217 01:50:30.142364 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-hb8nn_a6839ad8-db8d-42b3-b9b2-40e9ff97cd8d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:30 crc kubenswrapper[4805]: I0217 01:50:30.368472 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7d5b44676f-vbgmb_b74fe76f-17fb-498c-a46c-088c2df512d5/keystone-api/0.log" Feb 17 01:50:30 crc kubenswrapper[4805]: I0217 01:50:30.452413 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29521501-l8z6t_9c029f0d-d189-4126-8bfb-80fd5b1f1247/keystone-cron/0.log" Feb 17 01:50:30 crc kubenswrapper[4805]: I0217 01:50:30.542888 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7c8e81a5-b0c2-4a31-8383-8022fa10fe96/kube-state-metrics/0.log" Feb 17 01:50:30 crc kubenswrapper[4805]: I0217 01:50:30.659534 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-86ss7_4a95c358-9f7f-42e7-b497-7f9f76dc01ce/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:30 crc kubenswrapper[4805]: I0217 01:50:30.835890 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_c841db68-4473-4305-91cc-75ec6f257ac0/mysqld-exporter/0.log" Feb 17 01:50:31 crc kubenswrapper[4805]: I0217 
01:50:31.029635 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7fd8fd677-jrz8c_c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac/neutron-api/0.log" Feb 17 01:50:31 crc kubenswrapper[4805]: I0217 01:50:31.100672 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7fd8fd677-jrz8c_c942e5d4-b3d3-42c1-8afa-63a3a1b1d8ac/neutron-httpd/0.log" Feb 17 01:50:31 crc kubenswrapper[4805]: I0217 01:50:31.408009 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a6481c50-bc40-4ee2-a161-127c2d2d23df/nova-api-log/0.log" Feb 17 01:50:31 crc kubenswrapper[4805]: I0217 01:50:31.428657 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_97b23e4f-706d-470f-9b61-ea4e1a3ec9c7/nova-cell0-conductor-conductor/0.log" Feb 17 01:50:31 crc kubenswrapper[4805]: I0217 01:50:31.646755 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_a6481c50-bc40-4ee2-a161-127c2d2d23df/nova-api-api/0.log" Feb 17 01:50:31 crc kubenswrapper[4805]: I0217 01:50:31.732588 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_fe0a1ae2-2057-4b54-b01d-ca8bafe09be3/nova-cell1-conductor-conductor/0.log" Feb 17 01:50:32 crc kubenswrapper[4805]: I0217 01:50:32.035834 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a4bdd596-26e7-491d-84ca-d19f950eb389/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 01:50:32 crc kubenswrapper[4805]: I0217 01:50:32.138693 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_20146d61-c58a-4fbe-9cb8-9a11af3b159a/nova-metadata-log/0.log" Feb 17 01:50:32 crc kubenswrapper[4805]: I0217 01:50:32.418899 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ad4deee6-2619-4e76-9a81-9adbaa868ee2/nova-scheduler-scheduler/0.log" Feb 17 01:50:32 crc kubenswrapper[4805]: I0217 01:50:32.469961 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f85b021d-db5c-4716-b94f-2198c439c614/mysql-bootstrap/0.log" Feb 17 01:50:32 crc kubenswrapper[4805]: I0217 01:50:32.623755 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f85b021d-db5c-4716-b94f-2198c439c614/mysql-bootstrap/0.log" Feb 17 01:50:32 crc kubenswrapper[4805]: I0217 01:50:32.713414 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f85b021d-db5c-4716-b94f-2198c439c614/galera/0.log" Feb 17 01:50:32 crc kubenswrapper[4805]: I0217 01:50:32.813188 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2cc2653c-ccd4-46b3-993c-2447efa79c98/mysql-bootstrap/0.log" Feb 17 01:50:33 crc kubenswrapper[4805]: I0217 01:50:33.032162 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2cc2653c-ccd4-46b3-993c-2447efa79c98/galera/0.log" Feb 17 01:50:33 crc kubenswrapper[4805]: I0217 01:50:33.067165 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2cc2653c-ccd4-46b3-993c-2447efa79c98/mysql-bootstrap/0.log" Feb 17 01:50:33 crc kubenswrapper[4805]: I0217 01:50:33.253550 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3d04a0a0-da8e-4d58-b70c-b0e60bd9660c/openstackclient/0.log" Feb 17 01:50:33 crc kubenswrapper[4805]: I0217 01:50:33.347955 4805 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-controller-cpgf5_1fc3dff9-1209-4d8b-8927-96f5ffac33f6/ovn-controller/0.log" Feb 17 01:50:33 crc kubenswrapper[4805]: I0217 01:50:33.561250 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gp5nc_c76aae77-30fe-4644-96a9-4c4d2978e3d2/openstack-network-exporter/0.log" Feb 17 01:50:33 crc kubenswrapper[4805]: I0217 01:50:33.719785 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dlg8k_ff3989a8-bd47-4d94-bf91-47e1dd5f61d8/ovsdb-server-init/0.log" Feb 17 01:50:33 crc kubenswrapper[4805]: E0217 01:50:33.786620 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:50:33 crc kubenswrapper[4805]: I0217 01:50:33.828485 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_20146d61-c58a-4fbe-9cb8-9a11af3b159a/nova-metadata-metadata/0.log" Feb 17 01:50:33 crc kubenswrapper[4805]: I0217 01:50:33.880874 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dlg8k_ff3989a8-bd47-4d94-bf91-47e1dd5f61d8/ovsdb-server-init/0.log" Feb 17 01:50:33 crc kubenswrapper[4805]: I0217 01:50:33.898527 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dlg8k_ff3989a8-bd47-4d94-bf91-47e1dd5f61d8/ovs-vswitchd/0.log" Feb 17 01:50:33 crc kubenswrapper[4805]: I0217 01:50:33.907797 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dlg8k_ff3989a8-bd47-4d94-bf91-47e1dd5f61d8/ovsdb-server/0.log" Feb 17 01:50:34 crc kubenswrapper[4805]: I0217 01:50:34.085310 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_106aacfc-bb6d-46b1-b61b-35ee9f84e1d3/openstack-network-exporter/0.log" Feb 17 01:50:34 crc kubenswrapper[4805]: I0217 01:50:34.129245 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ztjgw_0a8a3709-95d6-48e3-94bb-b41bb5ed017c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:34 crc kubenswrapper[4805]: I0217 01:50:34.306976 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_106aacfc-bb6d-46b1-b61b-35ee9f84e1d3/ovn-northd/0.log" Feb 17 01:50:34 crc kubenswrapper[4805]: I0217 01:50:34.355997 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0176eefc-4b9d-4e1f-913e-495ceb0c7c78/openstack-network-exporter/0.log" Feb 17 01:50:34 crc kubenswrapper[4805]: I0217 01:50:34.406055 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0176eefc-4b9d-4e1f-913e-495ceb0c7c78/ovsdbserver-nb/0.log" Feb 17 01:50:34 crc kubenswrapper[4805]: I0217 01:50:34.587663 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e51af0b4-1c0c-4763-81f7-bf6ca2776b80/openstack-network-exporter/0.log" Feb 17 01:50:34 crc kubenswrapper[4805]: I0217 01:50:34.593066 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e51af0b4-1c0c-4763-81f7-bf6ca2776b80/ovsdbserver-sb/0.log" Feb 17 01:50:34 crc kubenswrapper[4805]: I0217 01:50:34.779483 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-65599f5544-8m95b_ab220f14-8200-4576-a0bf-ee0bc1d2e11e/placement-api/0.log" Feb 17 01:50:34 crc kubenswrapper[4805]: I0217 01:50:34.868682 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65599f5544-8m95b_ab220f14-8200-4576-a0bf-ee0bc1d2e11e/placement-log/0.log" Feb 17 01:50:34 crc kubenswrapper[4805]: I0217 01:50:34.974111 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ec567d49-235c-4e83-8b76-c5df4e187fc0/init-config-reloader/0.log" Feb 17 01:50:35 crc kubenswrapper[4805]: I0217 01:50:35.127265 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ec567d49-235c-4e83-8b76-c5df4e187fc0/init-config-reloader/0.log" Feb 17 01:50:35 crc kubenswrapper[4805]: I0217 01:50:35.154508 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ec567d49-235c-4e83-8b76-c5df4e187fc0/config-reloader/0.log" Feb 17 01:50:35 crc kubenswrapper[4805]: I0217 01:50:35.205183 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ec567d49-235c-4e83-8b76-c5df4e187fc0/thanos-sidecar/0.log" Feb 17 01:50:35 crc kubenswrapper[4805]: I0217 01:50:35.251549 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_ec567d49-235c-4e83-8b76-c5df4e187fc0/prometheus/0.log" Feb 17 01:50:35 crc kubenswrapper[4805]: I0217 01:50:35.409268 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d97e2601-4fd8-4dbf-bef1-c8483ba79667/setup-container/0.log" Feb 17 01:50:35 crc kubenswrapper[4805]: I0217 01:50:35.543778 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d97e2601-4fd8-4dbf-bef1-c8483ba79667/setup-container/0.log" Feb 17 01:50:35 crc kubenswrapper[4805]: I0217 01:50:35.611771 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d97e2601-4fd8-4dbf-bef1-c8483ba79667/rabbitmq/0.log" Feb 17 01:50:35 crc kubenswrapper[4805]: I0217 01:50:35.635312 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1fd9b570-6f4d-49b9-96a4-54bb6744ea22/setup-container/0.log" Feb 17 01:50:35 crc kubenswrapper[4805]: I0217 01:50:35.821386 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1fd9b570-6f4d-49b9-96a4-54bb6744ea22/rabbitmq/0.log" Feb 17 01:50:35 crc kubenswrapper[4805]: I0217 01:50:35.865630 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-lx94c_a0357546-9ba3-46f6-98cd-bee9c102f671/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:35 crc kubenswrapper[4805]: I0217 01:50:35.889438 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1fd9b570-6f4d-49b9-96a4-54bb6744ea22/setup-container/0.log" Feb 17 01:50:36 crc kubenswrapper[4805]: I0217 01:50:36.110742 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-x2pbx_0fe4c30a-bcb1-429d-8796-a1bacaec3988/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:36 crc kubenswrapper[4805]: I0217 01:50:36.139583 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-mv28h_dfa04663-d25d-40ee-a669-097d415e754e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:36 crc kubenswrapper[4805]: I0217 01:50:36.392446 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-j4hz5_daab539c-cd12-429d-b5ec-a957900aa0c2/ssh-known-hosts-edpm-deployment/0.log" Feb 17 01:50:36 crc kubenswrapper[4805]: I0217 01:50:36.525637 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b9959496c-vdvnd_f289780f-6025-465b-859f-e951ffd9e8e5/proxy-server/0.log" Feb 17 01:50:36 crc kubenswrapper[4805]: I0217 01:50:36.558010 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b9959496c-vdvnd_f289780f-6025-465b-859f-e951ffd9e8e5/proxy-httpd/0.log" Feb 17 01:50:36 crc kubenswrapper[4805]: I0217 01:50:36.600085 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-c298m_8150553f-2c0e-4371-9b0d-22364c3c9db4/swift-ring-rebalance/0.log" Feb 17 01:50:36 crc kubenswrapper[4805]: I0217 01:50:36.742293 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/account-reaper/0.log" Feb 17 01:50:36 crc kubenswrapper[4805]: I0217 01:50:36.779665 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/account-auditor/0.log" Feb 17 01:50:36 crc kubenswrapper[4805]: E0217 01:50:36.791292 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:50:36 crc kubenswrapper[4805]: I0217 01:50:36.880161 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/account-replicator/0.log" Feb 17 01:50:36 crc kubenswrapper[4805]: I0217 01:50:36.935344 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/container-auditor/0.log" Feb 17 01:50:36 crc kubenswrapper[4805]: I0217 01:50:36.968799 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/account-server/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.044138 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/container-replicator/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.059371 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/container-server/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.148095 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/container-updater/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.218444 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/object-expirer/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.254815 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/object-auditor/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.305374 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/object-replicator/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.417223 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/object-server/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.442812 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/rsync/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.457191 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/object-updater/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.508153 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_de228348-37d1-4ec0-9a47-11f4d895e6d6/swift-recon-cron/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.710525 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-bmf78_9f253d42-6a7d-4e45-94e3-52965a6880a4/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.715427 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4c8j7_c866feaf-36a5-4fe7-b8e7-1ba3de81424f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:37 crc kubenswrapper[4805]: I0217 01:50:37.906948 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-fhz6t_14e189b9-6c07-4b19-aba3-9f357bfa7639/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:38 crc kubenswrapper[4805]: I0217 01:50:38.008404 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-g8524_f91e2557-4edd-4cab-ae36-dce0f28acbb0/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:38 crc kubenswrapper[4805]: I0217 01:50:38.109017 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-nbz2b_7f69bd70-7951-4978-ad8e-dea9637e476a/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:38 crc kubenswrapper[4805]: I0217 01:50:38.291280 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qxxgp_ed5ab321-ffbb-45a2-8cec-03034de09b60/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:38 crc kubenswrapper[4805]: I0217 01:50:38.420581 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-rttgh_2f4b1196-c56e-477f-93ac-a1911fe564ef/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:38 crc kubenswrapper[4805]: I0217 01:50:38.474250 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mnfz7_5f257638-7e99-4278-9d14-395b4c2a89ac/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 01:50:42 crc kubenswrapper[4805]: I0217 01:50:42.399542 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_ccaa39fb-d7dc-4011-8b95-cd12af49adc5/memcached/0.log" Feb 17 01:50:45 crc kubenswrapper[4805]: E0217 01:50:45.787468 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:50:49 crc kubenswrapper[4805]: E0217 01:50:49.787497 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:50:56 crc kubenswrapper[4805]: E0217 01:50:56.788252 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:51:02 crc kubenswrapper[4805]: E0217 01:51:02.787639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:51:07 crc kubenswrapper[4805]: I0217 01:51:07.284857 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l_30cbd298-b82b-492f-ae51-31b5ddb442ec/util/0.log" Feb 17 01:51:07 crc kubenswrapper[4805]: I0217 01:51:07.445013 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l_30cbd298-b82b-492f-ae51-31b5ddb442ec/util/0.log" Feb 17 01:51:07 crc kubenswrapper[4805]: I0217 01:51:07.480677 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l_30cbd298-b82b-492f-ae51-31b5ddb442ec/pull/0.log" Feb 17 01:51:07 crc kubenswrapper[4805]: I0217 01:51:07.514990 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l_30cbd298-b82b-492f-ae51-31b5ddb442ec/pull/0.log" Feb 17 01:51:07 crc kubenswrapper[4805]: I0217 01:51:07.679283 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l_30cbd298-b82b-492f-ae51-31b5ddb442ec/util/0.log" Feb 17 01:51:07 crc kubenswrapper[4805]: I0217 01:51:07.685427 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l_30cbd298-b82b-492f-ae51-31b5ddb442ec/extract/0.log" Feb 17 01:51:07 crc kubenswrapper[4805]: I0217 01:51:07.710946 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4f437482a60a328c43ff6d13373b556447b99ac0a50f376a5368bf17b8fcs8l_30cbd298-b82b-492f-ae51-31b5ddb442ec/pull/0.log" Feb 17 01:51:07 crc 
kubenswrapper[4805]: E0217 01:51:07.786965 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:51:08 crc kubenswrapper[4805]: I0217 01:51:08.309758 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-jbspb_7db2d988-eae5-4cd7-9c68-b0fb971fc93b/manager/0.log" Feb 17 01:51:08 crc kubenswrapper[4805]: I0217 01:51:08.518273 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-9kk4z_5fbc6ce1-751b-42d1-9f5c-1acc6bf0fdd2/manager/0.log" Feb 17 01:51:08 crc kubenswrapper[4805]: I0217 01:51:08.933556 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-8zbbf_1eea0362-7f54-47ba-9669-c561ebcfd69d/manager/0.log" Feb 17 01:51:09 crc kubenswrapper[4805]: I0217 01:51:09.008437 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-7w28d_92f8fa10-b559-4065-bdc5-1bd1b6b89b22/manager/0.log" Feb 17 01:51:09 crc kubenswrapper[4805]: I0217 01:51:09.646872 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-lw4pd_97c634de-ffb7-4340-b622-782ee351de54/manager/0.log" Feb 17 01:51:10 crc kubenswrapper[4805]: I0217 01:51:10.189805 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-q8nbq_797181b9-d095-42dc-9bf6-f87665ba40c5/manager/0.log" Feb 17 01:51:10 crc kubenswrapper[4805]: I0217 01:51:10.482756 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-pmmsh_1fa270c7-9d09-444c-9ccd-70febd3fc194/manager/0.log" Feb 17 01:51:10 crc kubenswrapper[4805]: I0217 01:51:10.580684 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-xmfct_cf23fb16-30b5-49d7-a204-2140b7afa8dc/manager/0.log" Feb 17 01:51:10 crc kubenswrapper[4805]: I0217 01:51:10.643037 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-sbwq4_63f821ff-0cb4-4722-87df-511e1758288e/manager/0.log" Feb 17 01:51:10 crc kubenswrapper[4805]: I0217 01:51:10.846691 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-n26v4_13981a34-157a-433a-bb3b-5ec086dc6506/manager/0.log" Feb 17 01:51:11 crc kubenswrapper[4805]: I0217 01:51:11.003690 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-kdndp_da0ffea9-23b4-41d5-b3db-8d76372c949d/manager/0.log" Feb 17 01:51:11 crc kubenswrapper[4805]: I0217 01:51:11.139354 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-klb75_69a2b32d-8ef4-4bcf-a048-d169e9577f38/manager/0.log" Feb 17 01:51:11 crc kubenswrapper[4805]: I0217 01:51:11.353266 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cvbnrd_bee5466c-cf0f-4af9-8c9f-f323e814d02d/manager/0.log" Feb 17 01:51:12 crc kubenswrapper[4805]: I0217 01:51:12.162605 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-77b758d6b5-kkx5m_1ba3b534-fbdb-4c50-9b8b-1c3e4cc32855/operator/0.log" Feb 17 01:51:12 crc kubenswrapper[4805]: I0217 01:51:12.406289 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qt465_a080fb8f-92cc-40dd-b627-f4c04f83eace/registry-server/0.log" Feb 17 01:51:12 crc kubenswrapper[4805]: I0217 01:51:12.663250 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-h4vnc_8d4c5113-e984-4b0c-b1c2-45b31750d654/manager/0.log" Feb 17 01:51:12 crc kubenswrapper[4805]: I0217 01:51:12.821946 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-rwt67_46c67b9e-b2a0-4de9-9ecd-581c646896fe/manager/0.log" Feb 17 01:51:13 crc kubenswrapper[4805]: I0217 01:51:13.035942 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wkvdc_ad57ab8f-521c-44a5-b5d5-22264e6a79b0/operator/0.log" Feb 17 01:51:13 crc kubenswrapper[4805]: I0217 01:51:13.412811 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-prh9h_2f94b9ee-0d59-4dfc-8a01-c506d368327f/manager/0.log" Feb 17 01:51:13 crc kubenswrapper[4805]: I0217 01:51:13.912511 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-q8f27_e878e0f7-5fd0-4ab2-8503-ce2b71c26dbe/manager/0.log" Feb 17 01:51:14 crc kubenswrapper[4805]: I0217 01:51:14.154798 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6bf489ffd7-pw66z_b83736b0-6ae8-4fc4-ab02-f731ce083723/manager/0.log" Feb 17 01:51:14 crc kubenswrapper[4805]: I0217 01:51:14.313233 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-6phwc_807e772b-99f0-4578-b462-14b359040c87/manager/0.log" Feb 17 01:51:14 crc kubenswrapper[4805]: I0217 01:51:14.321214 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5fddb9857-6r6nf_ed86b6a0-d091-482b-8bdb-0d0ae3153733/manager/0.log" Feb 17 01:51:14 crc kubenswrapper[4805]: I0217 01:51:14.867810 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-q9dgv_fa1c6038-a220-4d79-8d11-97d0dbbb4b38/manager/0.log" Feb 17 01:51:16 crc kubenswrapper[4805]: E0217 01:51:16.786243 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:51:20 crc kubenswrapper[4805]: I0217 01:51:20.406000 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-vjpw9_d5c2df2a-fe2c-4a7f-ab0c-247fac6a47e9/manager/0.log" Feb 17 01:51:20 crc kubenswrapper[4805]: E0217 01:51:20.786904 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:51:31 crc kubenswrapper[4805]: E0217 01:51:31.787372 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:51:34 crc kubenswrapper[4805]: E0217 01:51:34.796951 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:51:40 crc kubenswrapper[4805]: I0217 01:51:40.734984 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w8ppr_5d3c99c6-7195-427e-8cd4-f484ad5ee41c/control-plane-machine-set-operator/0.log" Feb 17 01:51:40 crc kubenswrapper[4805]: I0217 01:51:40.938751 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bb4kv_fafbbfd8-7e64-432a-b47c-7ad2e9388f2c/machine-api-operator/0.log" Feb 17 01:51:40 crc kubenswrapper[4805]: I0217 01:51:40.946989 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bb4kv_fafbbfd8-7e64-432a-b47c-7ad2e9388f2c/kube-rbac-proxy/0.log" Feb 17 01:51:42 crc kubenswrapper[4805]: E0217 01:51:42.788001 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:51:47 crc kubenswrapper[4805]: E0217 01:51:47.787722 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:51:53 crc kubenswrapper[4805]: E0217 01:51:53.788011 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:51:56 crc kubenswrapper[4805]: I0217 01:51:56.276601 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-2h6qr_b3a98919-e2b8-4289-a46c-834a0c1f2460/cert-manager-controller/0.log" Feb 17 01:51:56 crc kubenswrapper[4805]: I0217 01:51:56.413387 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-l9t8q_0fd44ff9-92b9-4699-8435-a98175b3437e/cert-manager-cainjector/0.log" Feb 17 01:51:56 crc kubenswrapper[4805]: I0217 01:51:56.439948 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-6jgm7_212fc243-8a59-46c7-9885-ef307f45edaa/cert-manager-webhook/0.log" Feb 17 01:51:59 crc kubenswrapper[4805]: E0217 01:51:59.787123 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:52:08 crc kubenswrapper[4805]: E0217 01:52:08.786607 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:52:13 crc kubenswrapper[4805]: I0217 01:52:13.085485 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-ww84p_3a4aeea4-aa38-45c9-9aaa-13670a1602fe/nmstate-console-plugin/0.log" Feb 17 01:52:13 crc kubenswrapper[4805]: I0217 01:52:13.256592 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-j2dnr_35950c0f-8c05-4840-b6cb-7b61fd07008d/nmstate-handler/0.log" Feb 17 01:52:13 crc kubenswrapper[4805]: I0217 01:52:13.287827 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7lswf_3864820c-89a0-409c-84a6-7b4145026b77/kube-rbac-proxy/0.log" Feb 17 01:52:13 crc kubenswrapper[4805]: I0217 01:52:13.408220 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7lswf_3864820c-89a0-409c-84a6-7b4145026b77/nmstate-metrics/0.log" Feb 17 01:52:13 crc kubenswrapper[4805]: I0217 01:52:13.474068 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-chqqc_a78196b5-495a-412c-b5fb-a1905e5fbeff/nmstate-operator/0.log" Feb 17 01:52:13 crc kubenswrapper[4805]: I0217 01:52:13.612846 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-m5d7x_cb306405-b68c-4891-a537-df576d06ea6f/nmstate-webhook/0.log" Feb 17 01:52:13 crc kubenswrapper[4805]: E0217 01:52:13.786933 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:52:20 crc kubenswrapper[4805]: I0217 01:52:20.920777 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6lg"] Feb 17 01:52:20 crc kubenswrapper[4805]: E0217 01:52:20.921764 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b0392424-c102-4e07-a464-e32bd41da3ef" containerName="container-00" Feb 17 01:52:20 crc kubenswrapper[4805]: I0217 01:52:20.921776 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0392424-c102-4e07-a464-e32bd41da3ef" containerName="container-00" Feb 17 01:52:20 crc kubenswrapper[4805]: I0217 01:52:20.922424 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0392424-c102-4e07-a464-e32bd41da3ef" containerName="container-00" Feb 17 01:52:20 crc kubenswrapper[4805]: I0217 01:52:20.923927 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:20 crc kubenswrapper[4805]: I0217 01:52:20.949921 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6lg"] Feb 17 01:52:20 crc kubenswrapper[4805]: I0217 01:52:20.963986 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-catalog-content\") pod \"redhat-marketplace-2n6lg\" (UID: \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:20 crc kubenswrapper[4805]: I0217 01:52:20.964503 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-utilities\") pod \"redhat-marketplace-2n6lg\" (UID: \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:20 crc kubenswrapper[4805]: I0217 01:52:20.964610 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-672ln\" (UniqueName: \"kubernetes.io/projected/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-kube-api-access-672ln\") pod \"redhat-marketplace-2n6lg\" (UID: \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:21 crc kubenswrapper[4805]: I0217 01:52:21.066423 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-utilities\") pod \"redhat-marketplace-2n6lg\" (UID: \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:21 crc kubenswrapper[4805]: I0217 01:52:21.066508 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-672ln\" (UniqueName: \"kubernetes.io/projected/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-kube-api-access-672ln\") pod \"redhat-marketplace-2n6lg\" (UID: \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:21 crc kubenswrapper[4805]: I0217 01:52:21.066621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-catalog-content\") pod \"redhat-marketplace-2n6lg\" (UID: \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:21 crc kubenswrapper[4805]: I0217 01:52:21.067080 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-utilities\") pod \"redhat-marketplace-2n6lg\" (UID: 
\"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:21 crc kubenswrapper[4805]: I0217 01:52:21.067121 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-catalog-content\") pod \"redhat-marketplace-2n6lg\" (UID: \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:21 crc kubenswrapper[4805]: I0217 01:52:21.089101 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-672ln\" (UniqueName: \"kubernetes.io/projected/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-kube-api-access-672ln\") pod \"redhat-marketplace-2n6lg\" (UID: \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:21 crc kubenswrapper[4805]: I0217 01:52:21.264647 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:21 crc kubenswrapper[4805]: E0217 01:52:21.788308 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:52:21 crc kubenswrapper[4805]: I0217 01:52:21.834667 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6lg"] Feb 17 01:52:22 crc kubenswrapper[4805]: I0217 01:52:22.073891 4805 generic.go:334] "Generic (PLEG): container finished" podID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" containerID="dbfad150ca38f502126278a5f46e0bd9732dee09dc02c303d9c257a8805f0288" exitCode=0 Feb 17 01:52:22 crc kubenswrapper[4805]: I0217 01:52:22.073937 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6lg" event={"ID":"20a9a01b-bf8f-43af-9870-a15db4a0e1fa","Type":"ContainerDied","Data":"dbfad150ca38f502126278a5f46e0bd9732dee09dc02c303d9c257a8805f0288"} Feb 17 01:52:22 crc kubenswrapper[4805]: I0217 01:52:22.073963 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6lg" event={"ID":"20a9a01b-bf8f-43af-9870-a15db4a0e1fa","Type":"ContainerStarted","Data":"e69dfc9b78fd721fede3ea27dc36863ecd1aaa17c5f13ab8cc23e0b455775386"} Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.086499 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6lg" event={"ID":"20a9a01b-bf8f-43af-9870-a15db4a0e1fa","Type":"ContainerStarted","Data":"dde19cbe546ab9eda697accee7ea28c08259ce8fe4be54517e3cfed732bcceae"} Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.506542 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tvbhm"] Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.519850 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.527316 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tvbhm"] Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.549350 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-utilities\") pod \"certified-operators-tvbhm\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.549493 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-catalog-content\") pod \"certified-operators-tvbhm\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.549595 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9jx\" (UniqueName: \"kubernetes.io/projected/0158c6c9-711a-47a9-baae-6360bbed01fd-kube-api-access-9q9jx\") pod \"certified-operators-tvbhm\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.651136 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-utilities\") pod \"certified-operators-tvbhm\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.651264 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-catalog-content\") pod \"certified-operators-tvbhm\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.651387 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q9jx\" (UniqueName: \"kubernetes.io/projected/0158c6c9-711a-47a9-baae-6360bbed01fd-kube-api-access-9q9jx\") pod \"certified-operators-tvbhm\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.651725 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-utilities\") pod \"certified-operators-tvbhm\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.651992 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-catalog-content\") pod \"certified-operators-tvbhm\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.672217 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9q9jx\" (UniqueName: \"kubernetes.io/projected/0158c6c9-711a-47a9-baae-6360bbed01fd-kube-api-access-9q9jx\") pod \"certified-operators-tvbhm\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:23 crc kubenswrapper[4805]: I0217 01:52:23.837570 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:24 crc kubenswrapper[4805]: I0217 01:52:24.101501 4805 generic.go:334] "Generic (PLEG): container finished" podID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" containerID="dde19cbe546ab9eda697accee7ea28c08259ce8fe4be54517e3cfed732bcceae" exitCode=0 Feb 17 01:52:24 crc kubenswrapper[4805]: I0217 01:52:24.102034 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6lg" event={"ID":"20a9a01b-bf8f-43af-9870-a15db4a0e1fa","Type":"ContainerDied","Data":"dde19cbe546ab9eda697accee7ea28c08259ce8fe4be54517e3cfed732bcceae"} Feb 17 01:52:24 crc kubenswrapper[4805]: I0217 01:52:24.333280 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tvbhm"] Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.121832 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6lg" event={"ID":"20a9a01b-bf8f-43af-9870-a15db4a0e1fa","Type":"ContainerStarted","Data":"88205f038e7fd57f4988c0bed83bea11eecb88b07afb083a64fad48d37681a24"} Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.152619 4805 generic.go:334] "Generic (PLEG): container finished" podID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerID="40e073599c3e49b43fc1ab78a9e034587689788bc78c95494464f6ae5332b7b0" exitCode=0 Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.152662 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvbhm" event={"ID":"0158c6c9-711a-47a9-baae-6360bbed01fd","Type":"ContainerDied","Data":"40e073599c3e49b43fc1ab78a9e034587689788bc78c95494464f6ae5332b7b0"} Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.152686 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvbhm" event={"ID":"0158c6c9-711a-47a9-baae-6360bbed01fd","Type":"ContainerStarted","Data":"3c17da6262ff8264ba15b7e567122c9c51615e48abb25c6ce02f75ed3d23fa4a"} Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.166772 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2n6lg" podStartSLOduration=2.733632251 podStartE2EDuration="5.166754042s" podCreationTimestamp="2026-02-17 01:52:20 +0000 UTC" firstStartedPulling="2026-02-17 01:52:22.075763498 +0000 UTC m=+5368.091572886" lastFinishedPulling="2026-02-17 01:52:24.508885279 +0000 UTC m=+5370.524694677" observedRunningTime="2026-02-17 01:52:25.163607574 +0000 UTC m=+5371.179416972" watchObservedRunningTime="2026-02-17 01:52:25.166754042 +0000 UTC m=+5371.182563440" Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.290920 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zddsc"] Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.292806 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.305551 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zddsc"] Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.396916 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-utilities\") pod \"community-operators-zddsc\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.397264 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-catalog-content\") pod \"community-operators-zddsc\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.397477 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnth9\" (UniqueName: \"kubernetes.io/projected/a72d8a88-2ae5-497c-a5b9-bf12faedea45-kube-api-access-pnth9\") pod \"community-operators-zddsc\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.499263 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnth9\" (UniqueName: \"kubernetes.io/projected/a72d8a88-2ae5-497c-a5b9-bf12faedea45-kube-api-access-pnth9\") pod \"community-operators-zddsc\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.499389 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-utilities\") pod \"community-operators-zddsc\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.499406 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-catalog-content\") pod \"community-operators-zddsc\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.499831 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-catalog-content\") pod \"community-operators-zddsc\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.500159 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-utilities\") pod \"community-operators-zddsc\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.536389 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pnth9\" (UniqueName: \"kubernetes.io/projected/a72d8a88-2ae5-497c-a5b9-bf12faedea45-kube-api-access-pnth9\") pod \"community-operators-zddsc\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:25 crc kubenswrapper[4805]: I0217 01:52:25.620133 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:25 crc kubenswrapper[4805]: E0217 01:52:25.791097 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:52:26 crc kubenswrapper[4805]: I0217 01:52:26.139200 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zddsc"] Feb 17 01:52:26 crc kubenswrapper[4805]: W0217 01:52:26.359827 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda72d8a88_2ae5_497c_a5b9_bf12faedea45.slice/crio-0ca5dfa4e41e24f0b8a7dca774344bc67daef5d645f37652c329b8046516ffb8 WatchSource:0}: Error finding container 0ca5dfa4e41e24f0b8a7dca774344bc67daef5d645f37652c329b8046516ffb8: Status 404 returned error can't find the container with id 0ca5dfa4e41e24f0b8a7dca774344bc67daef5d645f37652c329b8046516ffb8 Feb 17 01:52:27 crc kubenswrapper[4805]: I0217 01:52:27.177460 4805 generic.go:334] "Generic (PLEG): container finished" podID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerID="b940aba2478ad9d0a424a39448533cf359deb636c2bb850101b42d24f02e9f02" exitCode=0 Feb 17 01:52:27 crc kubenswrapper[4805]: I0217 01:52:27.177534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zddsc" event={"ID":"a72d8a88-2ae5-497c-a5b9-bf12faedea45","Type":"ContainerDied","Data":"b940aba2478ad9d0a424a39448533cf359deb636c2bb850101b42d24f02e9f02"} Feb 17 01:52:27 crc kubenswrapper[4805]: I0217 01:52:27.178163 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zddsc" event={"ID":"a72d8a88-2ae5-497c-a5b9-bf12faedea45","Type":"ContainerStarted","Data":"0ca5dfa4e41e24f0b8a7dca774344bc67daef5d645f37652c329b8046516ffb8"} Feb 17 01:52:27 crc kubenswrapper[4805]: I0217 01:52:27.179877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvbhm" event={"ID":"0158c6c9-711a-47a9-baae-6360bbed01fd","Type":"ContainerStarted","Data":"1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c"} Feb 17 01:52:29 crc kubenswrapper[4805]: I0217 01:52:29.200015 4805 generic.go:334] "Generic (PLEG): container finished" podID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerID="1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c" exitCode=0 Feb 17 01:52:29 crc kubenswrapper[4805]: I0217 01:52:29.200116 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvbhm" event={"ID":"0158c6c9-711a-47a9-baae-6360bbed01fd","Type":"ContainerDied","Data":"1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c"} Feb 17 01:52:29 crc kubenswrapper[4805]: I0217 01:52:29.205270 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-zddsc" event={"ID":"a72d8a88-2ae5-497c-a5b9-bf12faedea45","Type":"ContainerStarted","Data":"b41c93382af16f9136f77f0fb2d1d8e12f8c2267aad15475fec49f5bf351a0fb"} Feb 17 01:52:30 crc kubenswrapper[4805]: I0217 01:52:30.190637 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5659c765-xsxhh_b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01/kube-rbac-proxy/0.log" Feb 17 01:52:30 crc kubenswrapper[4805]: I0217 01:52:30.217348 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvbhm" event={"ID":"0158c6c9-711a-47a9-baae-6360bbed01fd","Type":"ContainerStarted","Data":"262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f"} Feb 17 01:52:30 crc kubenswrapper[4805]: I0217 01:52:30.239558 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tvbhm" podStartSLOduration=2.789626579 podStartE2EDuration="7.239540863s" podCreationTimestamp="2026-02-17 01:52:23 +0000 UTC" firstStartedPulling="2026-02-17 01:52:25.157472993 +0000 UTC m=+5371.173282391" lastFinishedPulling="2026-02-17 01:52:29.607387267 +0000 UTC m=+5375.623196675" observedRunningTime="2026-02-17 01:52:30.235584862 +0000 UTC m=+5376.251394260" watchObservedRunningTime="2026-02-17 01:52:30.239540863 +0000 UTC m=+5376.255350261" Feb 17 01:52:30 crc kubenswrapper[4805]: I0217 01:52:30.368521 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5659c765-xsxhh_b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01/manager/0.log" Feb 17 01:52:31 crc kubenswrapper[4805]: I0217 01:52:31.228511 4805 generic.go:334] "Generic (PLEG): container finished" podID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerID="b41c93382af16f9136f77f0fb2d1d8e12f8c2267aad15475fec49f5bf351a0fb" exitCode=0 Feb 17 01:52:31 crc kubenswrapper[4805]: I0217 01:52:31.228603 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zddsc" event={"ID":"a72d8a88-2ae5-497c-a5b9-bf12faedea45","Type":"ContainerDied","Data":"b41c93382af16f9136f77f0fb2d1d8e12f8c2267aad15475fec49f5bf351a0fb"} Feb 17 01:52:31 crc kubenswrapper[4805]: I0217 01:52:31.265470 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:31 crc kubenswrapper[4805]: I0217 01:52:31.265518 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:31 crc kubenswrapper[4805]: I0217 01:52:31.365482 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:32 crc kubenswrapper[4805]: I0217 01:52:32.250745 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zddsc" event={"ID":"a72d8a88-2ae5-497c-a5b9-bf12faedea45","Type":"ContainerStarted","Data":"10ff366d67f3fafac353bf2ffa95f22f20ca72ece5be63e03068e854e4549009"} Feb 17 01:52:32 crc kubenswrapper[4805]: I0217 01:52:32.276847 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zddsc" podStartSLOduration=2.8593008060000003 podStartE2EDuration="7.276826806s" podCreationTimestamp="2026-02-17 01:52:25 +0000 UTC" firstStartedPulling="2026-02-17 01:52:27.179223224 +0000 UTC m=+5373.195032622" 
lastFinishedPulling="2026-02-17 01:52:31.596749224 +0000 UTC m=+5377.612558622" observedRunningTime="2026-02-17 01:52:32.270651914 +0000 UTC m=+5378.286461322" watchObservedRunningTime="2026-02-17 01:52:32.276826806 +0000 UTC m=+5378.292636204" Feb 17 01:52:32 crc kubenswrapper[4805]: I0217 01:52:32.317466 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:33 crc kubenswrapper[4805]: I0217 01:52:33.838603 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:33 crc kubenswrapper[4805]: I0217 01:52:33.839143 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:34 crc kubenswrapper[4805]: I0217 01:52:34.691947 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6lg"] Feb 17 01:52:34 crc kubenswrapper[4805]: I0217 01:52:34.692216 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2n6lg" podUID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" containerName="registry-server" containerID="cri-o://88205f038e7fd57f4988c0bed83bea11eecb88b07afb083a64fad48d37681a24" gracePeriod=2 Feb 17 01:52:34 crc kubenswrapper[4805]: E0217 01:52:34.797661 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:52:34 crc kubenswrapper[4805]: I0217 01:52:34.896563 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tvbhm" podUID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerName="registry-server" probeResult="failure" output=< Feb 17 01:52:34 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 01:52:34 crc kubenswrapper[4805]: > Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.283342 4805 generic.go:334] "Generic (PLEG): container finished" podID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" containerID="88205f038e7fd57f4988c0bed83bea11eecb88b07afb083a64fad48d37681a24" exitCode=0 Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.283368 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6lg" event={"ID":"20a9a01b-bf8f-43af-9870-a15db4a0e1fa","Type":"ContainerDied","Data":"88205f038e7fd57f4988c0bed83bea11eecb88b07afb083a64fad48d37681a24"} Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.283426 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2n6lg" event={"ID":"20a9a01b-bf8f-43af-9870-a15db4a0e1fa","Type":"ContainerDied","Data":"e69dfc9b78fd721fede3ea27dc36863ecd1aaa17c5f13ab8cc23e0b455775386"} Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.283444 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e69dfc9b78fd721fede3ea27dc36863ecd1aaa17c5f13ab8cc23e0b455775386" Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.284698 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.328195 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-672ln\" (UniqueName: \"kubernetes.io/projected/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-kube-api-access-672ln\") pod \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\" (UID: \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.328296 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-utilities\") pod \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\" (UID: \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.328370 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-catalog-content\") pod \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\" (UID: \"20a9a01b-bf8f-43af-9870-a15db4a0e1fa\") " Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.329282 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-utilities" (OuterVolumeSpecName: "utilities") pod "20a9a01b-bf8f-43af-9870-a15db4a0e1fa" (UID: "20a9a01b-bf8f-43af-9870-a15db4a0e1fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.337718 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-kube-api-access-672ln" (OuterVolumeSpecName: "kube-api-access-672ln") pod "20a9a01b-bf8f-43af-9870-a15db4a0e1fa" (UID: "20a9a01b-bf8f-43af-9870-a15db4a0e1fa"). InnerVolumeSpecName "kube-api-access-672ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.367470 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20a9a01b-bf8f-43af-9870-a15db4a0e1fa" (UID: "20a9a01b-bf8f-43af-9870-a15db4a0e1fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.431121 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-672ln\" (UniqueName: \"kubernetes.io/projected/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-kube-api-access-672ln\") on node \"crc\" DevicePath \"\"" Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.431155 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.431165 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a9a01b-bf8f-43af-9870-a15db4a0e1fa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.620348 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:35 crc kubenswrapper[4805]: I0217 01:52:35.620410 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:36 crc kubenswrapper[4805]: I0217 01:52:36.294861 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2n6lg" Feb 17 01:52:36 crc kubenswrapper[4805]: I0217 01:52:36.338670 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6lg"] Feb 17 01:52:36 crc kubenswrapper[4805]: I0217 01:52:36.354977 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2n6lg"] Feb 17 01:52:36 crc kubenswrapper[4805]: I0217 01:52:36.683251 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zddsc" podUID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerName="registry-server" probeResult="failure" output=< Feb 17 01:52:36 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 01:52:36 crc kubenswrapper[4805]: > Feb 17 01:52:36 crc kubenswrapper[4805]: E0217 01:52:36.790234 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:52:36 crc kubenswrapper[4805]: I0217 01:52:36.802888 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" path="/var/lib/kubelet/pods/20a9a01b-bf8f-43af-9870-a15db4a0e1fa/volumes" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.495430 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bpcjz"] Feb 17 01:52:40 crc kubenswrapper[4805]: E0217 01:52:40.496474 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" containerName="registry-server" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.496488 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" containerName="registry-server" Feb 17 01:52:40 crc kubenswrapper[4805]: E0217 01:52:40.496505 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" containerName="extract-utilities" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.496511 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" containerName="extract-utilities" Feb 17 01:52:40 crc kubenswrapper[4805]: E0217 01:52:40.496535 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" containerName="extract-content" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.496542 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" containerName="extract-content" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.496771 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a9a01b-bf8f-43af-9870-a15db4a0e1fa" containerName="registry-server" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.498391 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.518484 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bpcjz"] Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.590704 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-catalog-content\") pod \"redhat-operators-bpcjz\" (UID: \"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.590795 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-utilities\") pod \"redhat-operators-bpcjz\" (UID: \"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.590918 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ksvb\" (UniqueName: \"kubernetes.io/projected/e0d1e229-b049-4942-aa7c-e9ebbd074671-kube-api-access-2ksvb\") pod \"redhat-operators-bpcjz\" (UID: \"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.692684 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ksvb\" (UniqueName: \"kubernetes.io/projected/e0d1e229-b049-4942-aa7c-e9ebbd074671-kube-api-access-2ksvb\") pod \"redhat-operators-bpcjz\" (UID: \"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.692858 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-catalog-content\") pod \"redhat-operators-bpcjz\" (UID: \"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.692933 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-utilities\") pod \"redhat-operators-bpcjz\" (UID: 
\"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.693512 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-catalog-content\") pod \"redhat-operators-bpcjz\" (UID: \"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.693575 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-utilities\") pod \"redhat-operators-bpcjz\" (UID: \"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.714944 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ksvb\" (UniqueName: \"kubernetes.io/projected/e0d1e229-b049-4942-aa7c-e9ebbd074671-kube-api-access-2ksvb\") pod \"redhat-operators-bpcjz\" (UID: \"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:40 crc kubenswrapper[4805]: I0217 01:52:40.831707 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:41 crc kubenswrapper[4805]: I0217 01:52:41.363331 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bpcjz"] Feb 17 01:52:42 crc kubenswrapper[4805]: I0217 01:52:42.352708 4805 generic.go:334] "Generic (PLEG): container finished" podID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerID="3b1b807e45c9203d612a063f14330b91dd33f79591c386cfe696838b25736bbb" exitCode=0 Feb 17 01:52:42 crc kubenswrapper[4805]: I0217 01:52:42.352955 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpcjz" event={"ID":"e0d1e229-b049-4942-aa7c-e9ebbd074671","Type":"ContainerDied","Data":"3b1b807e45c9203d612a063f14330b91dd33f79591c386cfe696838b25736bbb"} Feb 17 01:52:42 crc kubenswrapper[4805]: I0217 01:52:42.352980 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpcjz" event={"ID":"e0d1e229-b049-4942-aa7c-e9ebbd074671","Type":"ContainerStarted","Data":"349da58dba1c5dd20a982b9e6bceef0e3adf7b11eb79d2d37e53e0e6ca9100e5"} Feb 17 01:52:43 crc kubenswrapper[4805]: I0217 01:52:43.373987 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpcjz" event={"ID":"e0d1e229-b049-4942-aa7c-e9ebbd074671","Type":"ContainerStarted","Data":"e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be"} Feb 17 01:52:43 crc kubenswrapper[4805]: I0217 01:52:43.921215 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:43 crc kubenswrapper[4805]: I0217 01:52:43.991133 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:45 crc kubenswrapper[4805]: I0217 01:52:45.442925 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xw7l6_841806ee-4049-4561-b025-3af0469f8fb2/prometheus-operator/0.log" Feb 17 01:52:45 crc kubenswrapper[4805]: I0217 01:52:45.636960 4805 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_c9f5bbbc-6740-427e-90d5-69011b2966cd/prometheus-operator-admission-webhook/0.log" Feb 17 01:52:45 crc kubenswrapper[4805]: I0217 01:52:45.672074 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:45 crc kubenswrapper[4805]: I0217 01:52:45.708606 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_93be50de-fcd3-41d1-8641-1b7c73cb26ea/prometheus-operator-admission-webhook/0.log" Feb 17 01:52:45 crc kubenswrapper[4805]: I0217 01:52:45.722808 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:45 crc kubenswrapper[4805]: E0217 01:52:45.787139 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:52:45 crc kubenswrapper[4805]: I0217 01:52:45.897022 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-rztzq_ec346c4e-f52f-4ee4-9697-e4b95405fe5d/operator/0.log" Feb 17 01:52:45 crc kubenswrapper[4805]: I0217 01:52:45.939217 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-lhfgx_ae33ba11-f42a-4134-be89-fbe93e76f0ae/observability-ui-dashboards/0.log" Feb 17 01:52:46 crc kubenswrapper[4805]: I0217 01:52:46.088077 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-btvcr_6b7fab38-3b46-42bc-a296-945f451f04f6/perses-operator/0.log" Feb 17 01:52:46 crc kubenswrapper[4805]: I0217 01:52:46.287651 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tvbhm"] Feb 17 01:52:46 crc kubenswrapper[4805]: I0217 01:52:46.287901 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tvbhm" podUID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerName="registry-server" containerID="cri-o://262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f" gracePeriod=2 Feb 17 01:52:46 crc kubenswrapper[4805]: I0217 01:52:46.872785 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:46 crc kubenswrapper[4805]: I0217 01:52:46.956467 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q9jx\" (UniqueName: \"kubernetes.io/projected/0158c6c9-711a-47a9-baae-6360bbed01fd-kube-api-access-9q9jx\") pod \"0158c6c9-711a-47a9-baae-6360bbed01fd\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " Feb 17 01:52:46 crc kubenswrapper[4805]: I0217 01:52:46.956626 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-catalog-content\") pod \"0158c6c9-711a-47a9-baae-6360bbed01fd\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " Feb 17 01:52:46 crc kubenswrapper[4805]: I0217 01:52:46.956718 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-utilities\") pod \"0158c6c9-711a-47a9-baae-6360bbed01fd\" (UID: \"0158c6c9-711a-47a9-baae-6360bbed01fd\") " Feb 17 01:52:46 crc kubenswrapper[4805]: I0217 01:52:46.957480 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-utilities" (OuterVolumeSpecName: "utilities") pod "0158c6c9-711a-47a9-baae-6360bbed01fd" (UID: "0158c6c9-711a-47a9-baae-6360bbed01fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:52:46 crc kubenswrapper[4805]: I0217 01:52:46.960968 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:52:46 crc kubenswrapper[4805]: I0217 01:52:46.975189 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0158c6c9-711a-47a9-baae-6360bbed01fd-kube-api-access-9q9jx" (OuterVolumeSpecName: "kube-api-access-9q9jx") pod "0158c6c9-711a-47a9-baae-6360bbed01fd" (UID: "0158c6c9-711a-47a9-baae-6360bbed01fd"). InnerVolumeSpecName "kube-api-access-9q9jx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.001823 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0158c6c9-711a-47a9-baae-6360bbed01fd" (UID: "0158c6c9-711a-47a9-baae-6360bbed01fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.063235 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q9jx\" (UniqueName: \"kubernetes.io/projected/0158c6c9-711a-47a9-baae-6360bbed01fd-kube-api-access-9q9jx\") on node \"crc\" DevicePath \"\"" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.063276 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0158c6c9-711a-47a9-baae-6360bbed01fd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.408265 4805 generic.go:334] "Generic (PLEG): container finished" podID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerID="262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f" exitCode=0 Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.408422 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvbhm" event={"ID":"0158c6c9-711a-47a9-baae-6360bbed01fd","Type":"ContainerDied","Data":"262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f"} Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.408702 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvbhm" event={"ID":"0158c6c9-711a-47a9-baae-6360bbed01fd","Type":"ContainerDied","Data":"3c17da6262ff8264ba15b7e567122c9c51615e48abb25c6ce02f75ed3d23fa4a"} Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.408732 4805 scope.go:117] "RemoveContainer" containerID="262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.408497 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tvbhm" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.413494 4805 generic.go:334] "Generic (PLEG): container finished" podID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerID="e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be" exitCode=0 Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.413608 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpcjz" event={"ID":"e0d1e229-b049-4942-aa7c-e9ebbd074671","Type":"ContainerDied","Data":"e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be"} Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.467393 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tvbhm"] Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.469223 4805 scope.go:117] "RemoveContainer" containerID="1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.482802 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tvbhm"] Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.510922 4805 scope.go:117] "RemoveContainer" containerID="40e073599c3e49b43fc1ab78a9e034587689788bc78c95494464f6ae5332b7b0" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.545965 4805 scope.go:117] "RemoveContainer" containerID="262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f" Feb 17 01:52:47 crc kubenswrapper[4805]: E0217 01:52:47.546785 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f\": container with ID starting with 262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f not found: ID does not exist" containerID="262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.546813 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f"} err="failed to get container status \"262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f\": rpc error: code = NotFound desc = could not find container \"262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f\": container with ID starting with 262560beeaec2b2ba61ebc770266d2e0becbf96b187fd2ce9d869712a5592c0f not found: ID does not exist" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.546834 4805 scope.go:117] "RemoveContainer" containerID="1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c" Feb 17 01:52:47 crc kubenswrapper[4805]: E0217 01:52:47.547192 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c\": container with ID starting with 1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c not found: ID does not exist" containerID="1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.547231 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c"} err="failed to get container status 
\"1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c\": rpc error: code = NotFound desc = could not find container \"1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c\": container with ID starting with 1ae86352a31869209f7099e480c74cfb6de09e49e7e2d0b87e3abbdaf9f6d92c not found: ID does not exist" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.547256 4805 scope.go:117] "RemoveContainer" containerID="40e073599c3e49b43fc1ab78a9e034587689788bc78c95494464f6ae5332b7b0" Feb 17 01:52:47 crc kubenswrapper[4805]: E0217 01:52:47.547662 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40e073599c3e49b43fc1ab78a9e034587689788bc78c95494464f6ae5332b7b0\": container with ID starting with 40e073599c3e49b43fc1ab78a9e034587689788bc78c95494464f6ae5332b7b0 not found: ID does not exist" containerID="40e073599c3e49b43fc1ab78a9e034587689788bc78c95494464f6ae5332b7b0" Feb 17 01:52:47 crc kubenswrapper[4805]: I0217 01:52:47.547703 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40e073599c3e49b43fc1ab78a9e034587689788bc78c95494464f6ae5332b7b0"} err="failed to get container status \"40e073599c3e49b43fc1ab78a9e034587689788bc78c95494464f6ae5332b7b0\": rpc error: code = NotFound desc = could not find container \"40e073599c3e49b43fc1ab78a9e034587689788bc78c95494464f6ae5332b7b0\": container with ID starting with 40e073599c3e49b43fc1ab78a9e034587689788bc78c95494464f6ae5332b7b0 not found: ID does not exist" Feb 17 01:52:48 crc kubenswrapper[4805]: I0217 01:52:48.426966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpcjz" event={"ID":"e0d1e229-b049-4942-aa7c-e9ebbd074671","Type":"ContainerStarted","Data":"271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372"} Feb 17 01:52:48 crc kubenswrapper[4805]: I0217 01:52:48.465622 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bpcjz" podStartSLOduration=2.951276034 podStartE2EDuration="8.465599636s" podCreationTimestamp="2026-02-17 01:52:40 +0000 UTC" firstStartedPulling="2026-02-17 01:52:42.355749299 +0000 UTC m=+5388.371558697" lastFinishedPulling="2026-02-17 01:52:47.870072901 +0000 UTC m=+5393.885882299" observedRunningTime="2026-02-17 01:52:48.448673194 +0000 UTC m=+5394.464482632" watchObservedRunningTime="2026-02-17 01:52:48.465599636 +0000 UTC m=+5394.481409044" Feb 17 01:52:48 crc kubenswrapper[4805]: I0217 01:52:48.493066 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zddsc"] Feb 17 01:52:48 crc kubenswrapper[4805]: I0217 01:52:48.493299 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zddsc" podUID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerName="registry-server" containerID="cri-o://10ff366d67f3fafac353bf2ffa95f22f20ca72ece5be63e03068e854e4549009" gracePeriod=2 Feb 17 01:52:48 crc kubenswrapper[4805]: I0217 01:52:48.799401 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0158c6c9-711a-47a9-baae-6360bbed01fd" path="/var/lib/kubelet/pods/0158c6c9-711a-47a9-baae-6360bbed01fd/volumes" Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.437194 4805 generic.go:334] "Generic (PLEG): container finished" podID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerID="10ff366d67f3fafac353bf2ffa95f22f20ca72ece5be63e03068e854e4549009" 
exitCode=0 Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.437353 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zddsc" event={"ID":"a72d8a88-2ae5-497c-a5b9-bf12faedea45","Type":"ContainerDied","Data":"10ff366d67f3fafac353bf2ffa95f22f20ca72ece5be63e03068e854e4549009"} Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.437549 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zddsc" event={"ID":"a72d8a88-2ae5-497c-a5b9-bf12faedea45","Type":"ContainerDied","Data":"0ca5dfa4e41e24f0b8a7dca774344bc67daef5d645f37652c329b8046516ffb8"} Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.437573 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ca5dfa4e41e24f0b8a7dca774344bc67daef5d645f37652c329b8046516ffb8" Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.562731 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.717202 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-utilities\") pod \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.717380 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-catalog-content\") pod \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.717440 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnth9\" (UniqueName: \"kubernetes.io/projected/a72d8a88-2ae5-497c-a5b9-bf12faedea45-kube-api-access-pnth9\") pod \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\" (UID: \"a72d8a88-2ae5-497c-a5b9-bf12faedea45\") " Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.724035 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a72d8a88-2ae5-497c-a5b9-bf12faedea45-kube-api-access-pnth9" (OuterVolumeSpecName: "kube-api-access-pnth9") pod "a72d8a88-2ae5-497c-a5b9-bf12faedea45" (UID: "a72d8a88-2ae5-497c-a5b9-bf12faedea45"). InnerVolumeSpecName "kube-api-access-pnth9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.748912 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-utilities" (OuterVolumeSpecName: "utilities") pod "a72d8a88-2ae5-497c-a5b9-bf12faedea45" (UID: "a72d8a88-2ae5-497c-a5b9-bf12faedea45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.792589 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a72d8a88-2ae5-497c-a5b9-bf12faedea45" (UID: "a72d8a88-2ae5-497c-a5b9-bf12faedea45"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.823827 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.823868 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnth9\" (UniqueName: \"kubernetes.io/projected/a72d8a88-2ae5-497c-a5b9-bf12faedea45-kube-api-access-pnth9\") on node \"crc\" DevicePath \"\"" Feb 17 01:52:49 crc kubenswrapper[4805]: I0217 01:52:49.823883 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a72d8a88-2ae5-497c-a5b9-bf12faedea45-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:52:50 crc kubenswrapper[4805]: I0217 01:52:50.447579 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zddsc" Feb 17 01:52:50 crc kubenswrapper[4805]: I0217 01:52:50.485173 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zddsc"] Feb 17 01:52:50 crc kubenswrapper[4805]: I0217 01:52:50.494385 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zddsc"] Feb 17 01:52:50 crc kubenswrapper[4805]: E0217 01:52:50.786578 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:52:50 crc kubenswrapper[4805]: I0217 01:52:50.800439 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" path="/var/lib/kubelet/pods/a72d8a88-2ae5-497c-a5b9-bf12faedea45/volumes" Feb 17 01:52:50 crc kubenswrapper[4805]: I0217 01:52:50.832520 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:50 crc kubenswrapper[4805]: I0217 01:52:50.832571 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:52:52 crc kubenswrapper[4805]: I0217 01:52:52.618728 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bpcjz" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerName="registry-server" probeResult="failure" output=< Feb 17 01:52:52 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 01:52:52 crc kubenswrapper[4805]: > Feb 17 01:52:53 crc kubenswrapper[4805]: I0217 01:52:53.077506 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:52:53 crc kubenswrapper[4805]: I0217 01:52:53.077565 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 17 01:52:59 crc kubenswrapper[4805]: E0217 01:52:59.787189 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:53:02 crc kubenswrapper[4805]: I0217 01:53:02.627937 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bpcjz" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerName="registry-server" probeResult="failure" output=< Feb 17 01:53:02 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 01:53:02 crc kubenswrapper[4805]: > Feb 17 01:53:04 crc kubenswrapper[4805]: I0217 01:53:04.299813 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-nhjhc_cb0332e7-6f7b-4294-878c-85fc89493a58/cluster-logging-operator/0.log" Feb 17 01:53:04 crc kubenswrapper[4805]: I0217 01:53:04.337216 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-ncz6q_fddfe695-106b-4180-b8bb-57ad148b8a6d/collector/0.log" Feb 17 01:53:04 crc kubenswrapper[4805]: I0217 01:53:04.478956 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_03d9a31a-0121-42e4-a82e-7ee97d31beb1/loki-compactor/0.log" Feb 17 01:53:04 crc kubenswrapper[4805]: I0217 01:53:04.562580 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-h56f6_a39490eb-8fc3-40ae-9968-453acf06f5da/loki-distributor/0.log" Feb 17 01:53:04 crc kubenswrapper[4805]: I0217 01:53:04.693944 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-648db9fc4d-chpzm_605689df-27a1-4160-b336-40c665824a83/gateway/0.log" Feb 17 01:53:04 crc kubenswrapper[4805]: I0217 01:53:04.725439 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-648db9fc4d-chpzm_605689df-27a1-4160-b336-40c665824a83/opa/0.log" Feb 17 01:53:04 crc kubenswrapper[4805]: I0217 01:53:04.841203 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-648db9fc4d-nsb7r_c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359/gateway/0.log" Feb 17 01:53:04 crc kubenswrapper[4805]: I0217 01:53:04.875628 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-648db9fc4d-nsb7r_c5ac6ee2-b17b-4e79-8c1f-5cda68aa4359/opa/0.log" Feb 17 01:53:04 crc kubenswrapper[4805]: I0217 01:53:04.947837 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_a4aa5b24-6f45-4330-bda1-89fe3963ea2b/loki-index-gateway/0.log" Feb 17 01:53:05 crc kubenswrapper[4805]: I0217 01:53:05.089281 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_b5f0edd5-0fe1-4af9-b5c7-753847dd83c6/loki-ingester/0.log" Feb 17 01:53:05 crc kubenswrapper[4805]: I0217 01:53:05.128312 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-jkggq_b0bcda11-009a-411a-8e27-ea83b6953ef9/loki-querier/0.log" Feb 17 01:53:05 crc kubenswrapper[4805]: I0217 01:53:05.279698 4805 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-xv8tz_f16ed9b4-0dca-404a-b943-ccb244e680c0/loki-query-frontend/0.log" Feb 17 01:53:05 crc kubenswrapper[4805]: E0217 01:53:05.786881 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:53:10 crc kubenswrapper[4805]: E0217 01:53:10.787414 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:53:11 crc kubenswrapper[4805]: I0217 01:53:11.888129 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bpcjz" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerName="registry-server" probeResult="failure" output=< Feb 17 01:53:11 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 17 01:53:11 crc kubenswrapper[4805]: > Feb 17 01:53:16 crc kubenswrapper[4805]: E0217 01:53:16.788513 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:53:20 crc kubenswrapper[4805]: I0217 01:53:20.898338 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:53:20 crc kubenswrapper[4805]: I0217 01:53:20.972124 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:53:21 crc kubenswrapper[4805]: I0217 01:53:21.143481 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bpcjz"] Feb 17 01:53:22 crc kubenswrapper[4805]: E0217 01:53:22.789895 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:53:22 crc kubenswrapper[4805]: I0217 01:53:22.833453 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bpcjz" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerName="registry-server" containerID="cri-o://271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372" gracePeriod=2 Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.077398 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.077714 4805 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.326997 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.507954 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-utilities\") pod \"e0d1e229-b049-4942-aa7c-e9ebbd074671\" (UID: \"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.508008 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ksvb\" (UniqueName: \"kubernetes.io/projected/e0d1e229-b049-4942-aa7c-e9ebbd074671-kube-api-access-2ksvb\") pod \"e0d1e229-b049-4942-aa7c-e9ebbd074671\" (UID: \"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.508101 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-catalog-content\") pod \"e0d1e229-b049-4942-aa7c-e9ebbd074671\" (UID: \"e0d1e229-b049-4942-aa7c-e9ebbd074671\") " Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.510909 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-utilities" (OuterVolumeSpecName: "utilities") pod "e0d1e229-b049-4942-aa7c-e9ebbd074671" (UID: "e0d1e229-b049-4942-aa7c-e9ebbd074671"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.519446 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d1e229-b049-4942-aa7c-e9ebbd074671-kube-api-access-2ksvb" (OuterVolumeSpecName: "kube-api-access-2ksvb") pod "e0d1e229-b049-4942-aa7c-e9ebbd074671" (UID: "e0d1e229-b049-4942-aa7c-e9ebbd074671"). InnerVolumeSpecName "kube-api-access-2ksvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.612108 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.612147 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ksvb\" (UniqueName: \"kubernetes.io/projected/e0d1e229-b049-4942-aa7c-e9ebbd074671-kube-api-access-2ksvb\") on node \"crc\" DevicePath \"\"" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.632031 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0d1e229-b049-4942-aa7c-e9ebbd074671" (UID: "e0d1e229-b049-4942-aa7c-e9ebbd074671"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.706597 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mpwzw_8d4abb9e-d062-4155-bb5d-ef34d3ddc282/kube-rbac-proxy/0.log" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.714012 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0d1e229-b049-4942-aa7c-e9ebbd074671-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.842179 4805 generic.go:334] "Generic (PLEG): container finished" podID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerID="271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372" exitCode=0 Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.842217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpcjz" event={"ID":"e0d1e229-b049-4942-aa7c-e9ebbd074671","Type":"ContainerDied","Data":"271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372"} Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.842241 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpcjz" event={"ID":"e0d1e229-b049-4942-aa7c-e9ebbd074671","Type":"ContainerDied","Data":"349da58dba1c5dd20a982b9e6bceef0e3adf7b11eb79d2d37e53e0e6ca9100e5"} Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.842259 4805 scope.go:117] "RemoveContainer" containerID="271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.842395 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bpcjz" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.853453 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-mpwzw_8d4abb9e-d062-4155-bb5d-ef34d3ddc282/controller/0.log" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.865504 4805 scope.go:117] "RemoveContainer" containerID="e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.891465 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bpcjz"] Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.904518 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bpcjz"] Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.925860 4805 scope.go:117] "RemoveContainer" containerID="3b1b807e45c9203d612a063f14330b91dd33f79591c386cfe696838b25736bbb" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.937433 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-frr-files/0.log" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.949915 4805 scope.go:117] "RemoveContainer" containerID="271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372" Feb 17 01:53:23 crc kubenswrapper[4805]: E0217 01:53:23.954629 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372\": container with ID starting with 271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372 not found: ID does not exist" 
containerID="271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.954660 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372"} err="failed to get container status \"271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372\": rpc error: code = NotFound desc = could not find container \"271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372\": container with ID starting with 271eab3f375a93bf5734111e6fc794529694ab20d1f2e0e2375de3b80b2ae372 not found: ID does not exist" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.954681 4805 scope.go:117] "RemoveContainer" containerID="e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be" Feb 17 01:53:23 crc kubenswrapper[4805]: E0217 01:53:23.954986 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be\": container with ID starting with e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be not found: ID does not exist" containerID="e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.955006 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be"} err="failed to get container status \"e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be\": rpc error: code = NotFound desc = could not find container \"e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be\": container with ID starting with e317be1a94a7650b2fab1691b25f418f15a50979b755bbd957a11398fc3f35be not found: ID does not exist" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.955019 4805 scope.go:117] "RemoveContainer" containerID="3b1b807e45c9203d612a063f14330b91dd33f79591c386cfe696838b25736bbb" Feb 17 01:53:23 crc kubenswrapper[4805]: E0217 01:53:23.955370 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b1b807e45c9203d612a063f14330b91dd33f79591c386cfe696838b25736bbb\": container with ID starting with 3b1b807e45c9203d612a063f14330b91dd33f79591c386cfe696838b25736bbb not found: ID does not exist" containerID="3b1b807e45c9203d612a063f14330b91dd33f79591c386cfe696838b25736bbb" Feb 17 01:53:23 crc kubenswrapper[4805]: I0217 01:53:23.955390 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b1b807e45c9203d612a063f14330b91dd33f79591c386cfe696838b25736bbb"} err="failed to get container status \"3b1b807e45c9203d612a063f14330b91dd33f79591c386cfe696838b25736bbb\": rpc error: code = NotFound desc = could not find container \"3b1b807e45c9203d612a063f14330b91dd33f79591c386cfe696838b25736bbb\": container with ID starting with 3b1b807e45c9203d612a063f14330b91dd33f79591c386cfe696838b25736bbb not found: ID does not exist" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.113498 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-reloader/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.130500 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-reloader/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.166816 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-frr-files/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.171555 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-metrics/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.311710 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-reloader/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.349077 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-frr-files/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.365599 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-metrics/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.366571 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-metrics/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.486527 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-frr-files/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.498635 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-reloader/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.536340 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/controller/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.560471 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/cp-metrics/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.672696 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/frr-metrics/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.727398 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/kube-rbac-proxy/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.760240 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/kube-rbac-proxy-frr/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.796255 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" path="/var/lib/kubelet/pods/e0d1e229-b049-4942-aa7c-e9ebbd074671/volumes" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.962673 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/reloader/0.log" Feb 17 01:53:24 crc kubenswrapper[4805]: I0217 01:53:24.979760 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-wml4d_a824b3ba-107f-4f67-bcca-690632e343c2/frr-k8s-webhook-server/0.log" Feb 17 01:53:25 crc kubenswrapper[4805]: I0217 01:53:25.177050 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-8595899c55-2hhkf_1a4b50ae-ecf2-4925-8d51-c9e1d1cdd2e8/manager/0.log" Feb 17 01:53:25 crc kubenswrapper[4805]: I0217 01:53:25.860179 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-676bc65957-7vlsr_8d2f5088-2aea-4d14-96a1-e1b14904efa0/webhook-server/0.log" Feb 17 01:53:25 crc kubenswrapper[4805]: I0217 01:53:25.879136 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m7ccg_f40bf9bc-c85c-4415-99f9-95daf9ad57ca/kube-rbac-proxy/0.log" Feb 17 01:53:26 crc kubenswrapper[4805]: I0217 01:53:26.069366 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8kmsb_dc5ad3ec-0480-4f9f-ac09-1506aa092f49/frr/0.log" Feb 17 01:53:26 crc kubenswrapper[4805]: I0217 01:53:26.589967 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m7ccg_f40bf9bc-c85c-4415-99f9-95daf9ad57ca/speaker/0.log" Feb 17 01:53:31 crc kubenswrapper[4805]: E0217 01:53:31.788162 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:53:37 crc kubenswrapper[4805]: E0217 01:53:37.787251 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:53:42 crc kubenswrapper[4805]: I0217 01:53:42.224077 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz_d8cf55d6-2938-4730-bacd-f6bdbb287fca/util/0.log" Feb 17 01:53:42 crc kubenswrapper[4805]: I0217 01:53:42.436022 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz_d8cf55d6-2938-4730-bacd-f6bdbb287fca/util/0.log" Feb 17 01:53:42 crc kubenswrapper[4805]: I0217 01:53:42.452627 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz_d8cf55d6-2938-4730-bacd-f6bdbb287fca/pull/0.log" Feb 17 01:53:42 crc kubenswrapper[4805]: I0217 01:53:42.497994 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz_d8cf55d6-2938-4730-bacd-f6bdbb287fca/pull/0.log" Feb 17 01:53:42 crc kubenswrapper[4805]: I0217 01:53:42.670841 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz_d8cf55d6-2938-4730-bacd-f6bdbb287fca/util/0.log" Feb 17 01:53:42 crc kubenswrapper[4805]: I0217 01:53:42.683149 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz_d8cf55d6-2938-4730-bacd-f6bdbb287fca/pull/0.log" Feb 17 01:53:42 crc kubenswrapper[4805]: I0217 01:53:42.718702 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e192n5nz_d8cf55d6-2938-4730-bacd-f6bdbb287fca/extract/0.log" Feb 17 01:53:42 crc kubenswrapper[4805]: I0217 01:53:42.841682 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs_35452466-502c-40f8-8b96-bf5ba6de3a8a/util/0.log" Feb 17 01:53:42 crc kubenswrapper[4805]: I0217 01:53:42.993856 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs_35452466-502c-40f8-8b96-bf5ba6de3a8a/pull/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.031805 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs_35452466-502c-40f8-8b96-bf5ba6de3a8a/util/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.055690 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs_35452466-502c-40f8-8b96-bf5ba6de3a8a/pull/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.209807 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs_35452466-502c-40f8-8b96-bf5ba6de3a8a/pull/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.226977 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs_35452466-502c-40f8-8b96-bf5ba6de3a8a/extract/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.248533 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fkcxs_35452466-502c-40f8-8b96-bf5ba6de3a8a/util/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.386976 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn_3cc3d85a-bf6d-4592-a085-dd47efd5331f/util/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.556057 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn_3cc3d85a-bf6d-4592-a085-dd47efd5331f/pull/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.565047 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn_3cc3d85a-bf6d-4592-a085-dd47efd5331f/util/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.581349 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn_3cc3d85a-bf6d-4592-a085-dd47efd5331f/pull/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.780957 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn_3cc3d85a-bf6d-4592-a085-dd47efd5331f/extract/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: E0217 01:53:43.787948 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.827442 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn_3cc3d85a-bf6d-4592-a085-dd47efd5331f/pull/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.827665 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213l26fn_3cc3d85a-bf6d-4592-a085-dd47efd5331f/util/0.log" Feb 17 01:53:43 crc kubenswrapper[4805]: I0217 01:53:43.964308 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-77nqw_34af9a3c-a732-4e70-b52a-abc52c108a33/extract-utilities/0.log" Feb 17 01:53:44 crc kubenswrapper[4805]: I0217 01:53:44.145724 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-77nqw_34af9a3c-a732-4e70-b52a-abc52c108a33/extract-utilities/0.log" Feb 17 01:53:44 crc kubenswrapper[4805]: I0217 01:53:44.187973 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-77nqw_34af9a3c-a732-4e70-b52a-abc52c108a33/extract-content/0.log" Feb 17 01:53:44 crc kubenswrapper[4805]: I0217 01:53:44.231437 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-77nqw_34af9a3c-a732-4e70-b52a-abc52c108a33/extract-content/0.log" Feb 17 01:53:44 crc kubenswrapper[4805]: I0217 01:53:44.355550 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-77nqw_34af9a3c-a732-4e70-b52a-abc52c108a33/extract-utilities/0.log" Feb 17 01:53:44 crc kubenswrapper[4805]: I0217 01:53:44.422161 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-77nqw_34af9a3c-a732-4e70-b52a-abc52c108a33/extract-content/0.log" Feb 17 01:53:44 crc kubenswrapper[4805]: I0217 01:53:44.542544 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nzjp8_f6d87408-264b-44dc-a29c-f1d154ce5b77/extract-utilities/0.log" Feb 17 01:53:44 crc kubenswrapper[4805]: I0217 01:53:44.868058 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nzjp8_f6d87408-264b-44dc-a29c-f1d154ce5b77/extract-utilities/0.log" Feb 17 01:53:45 crc kubenswrapper[4805]: I0217 01:53:45.043658 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nzjp8_f6d87408-264b-44dc-a29c-f1d154ce5b77/extract-content/0.log" Feb 17 01:53:45 crc kubenswrapper[4805]: I0217 01:53:45.057090 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nzjp8_f6d87408-264b-44dc-a29c-f1d154ce5b77/extract-content/0.log" Feb 17 01:53:45 crc kubenswrapper[4805]: I0217 01:53:45.059113 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-77nqw_34af9a3c-a732-4e70-b52a-abc52c108a33/registry-server/0.log" Feb 17 01:53:45 crc kubenswrapper[4805]: I0217 01:53:45.208264 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nzjp8_f6d87408-264b-44dc-a29c-f1d154ce5b77/extract-utilities/0.log" Feb 17 01:53:45 crc kubenswrapper[4805]: I0217 01:53:45.248850 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nzjp8_f6d87408-264b-44dc-a29c-f1d154ce5b77/extract-content/0.log" Feb 17 01:53:45 crc kubenswrapper[4805]: I0217 01:53:45.451482 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq_5a140ec1-85c5-4c6f-86cc-a14c6ecd120e/util/0.log" Feb 17 01:53:45 crc kubenswrapper[4805]: I0217 01:53:45.697543 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq_5a140ec1-85c5-4c6f-86cc-a14c6ecd120e/util/0.log" Feb 17 01:53:45 crc kubenswrapper[4805]: I0217 01:53:45.759213 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq_5a140ec1-85c5-4c6f-86cc-a14c6ecd120e/pull/0.log" Feb 17 01:53:45 crc kubenswrapper[4805]: I0217 01:53:45.760062 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq_5a140ec1-85c5-4c6f-86cc-a14c6ecd120e/pull/0.log" Feb 17 01:53:45 crc kubenswrapper[4805]: I0217 01:53:45.976736 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nzjp8_f6d87408-264b-44dc-a29c-f1d154ce5b77/registry-server/0.log" Feb 17 01:53:46 crc kubenswrapper[4805]: I0217 01:53:46.376459 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq_5a140ec1-85c5-4c6f-86cc-a14c6ecd120e/util/0.log" Feb 17 01:53:46 crc kubenswrapper[4805]: I0217 01:53:46.405217 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq_5a140ec1-85c5-4c6f-86cc-a14c6ecd120e/pull/0.log" Feb 17 01:53:46 crc kubenswrapper[4805]: I0217 01:53:46.410491 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jcmkq_5a140ec1-85c5-4c6f-86cc-a14c6ecd120e/extract/0.log" Feb 17 01:53:46 crc kubenswrapper[4805]: I0217 01:53:46.488881 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c_0b16c780-85de-4448-9515-790e38240412/util/0.log" Feb 17 01:53:46 crc kubenswrapper[4805]: I0217 01:53:46.666087 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c_0b16c780-85de-4448-9515-790e38240412/pull/0.log" Feb 17 01:53:46 crc kubenswrapper[4805]: I0217 01:53:46.667696 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c_0b16c780-85de-4448-9515-790e38240412/util/0.log" Feb 17 01:53:46 crc kubenswrapper[4805]: I0217 01:53:46.678339 4805 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c_0b16c780-85de-4448-9515-790e38240412/pull/0.log" Feb 17 01:53:46 crc kubenswrapper[4805]: I0217 01:53:46.879636 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c_0b16c780-85de-4448-9515-790e38240412/extract/0.log" Feb 17 01:53:46 crc kubenswrapper[4805]: I0217 01:53:46.913795 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c_0b16c780-85de-4448-9515-790e38240412/util/0.log" Feb 17 01:53:46 crc kubenswrapper[4805]: I0217 01:53:46.924306 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nv6ks_e80d2a1c-4272-4797-bf0c-03b011ed297f/marketplace-operator/0.log" Feb 17 01:53:46 crc kubenswrapper[4805]: I0217 01:53:46.930996 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca6fr4c_0b16c780-85de-4448-9515-790e38240412/pull/0.log" Feb 17 01:53:47 crc kubenswrapper[4805]: I0217 01:53:47.099997 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw2k2_08eb41f7-10d2-42b7-b96a-998cd213dfe1/extract-utilities/0.log" Feb 17 01:53:47 crc kubenswrapper[4805]: I0217 01:53:47.322406 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw2k2_08eb41f7-10d2-42b7-b96a-998cd213dfe1/extract-content/0.log" Feb 17 01:53:47 crc kubenswrapper[4805]: I0217 01:53:47.345767 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw2k2_08eb41f7-10d2-42b7-b96a-998cd213dfe1/extract-utilities/0.log" Feb 17 01:53:47 crc kubenswrapper[4805]: I0217 01:53:47.350064 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw2k2_08eb41f7-10d2-42b7-b96a-998cd213dfe1/extract-content/0.log" Feb 17 01:53:47 crc kubenswrapper[4805]: I0217 01:53:47.539499 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbczx_a5c9f438-05f1-4087-a87b-07d2db71c1e0/extract-utilities/0.log" Feb 17 01:53:47 crc kubenswrapper[4805]: I0217 01:53:47.543276 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw2k2_08eb41f7-10d2-42b7-b96a-998cd213dfe1/extract-utilities/0.log" Feb 17 01:53:47 crc kubenswrapper[4805]: I0217 01:53:47.565175 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw2k2_08eb41f7-10d2-42b7-b96a-998cd213dfe1/extract-content/0.log" Feb 17 01:53:47 crc kubenswrapper[4805]: I0217 01:53:47.743555 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbczx_a5c9f438-05f1-4087-a87b-07d2db71c1e0/extract-utilities/0.log" Feb 17 01:53:47 crc kubenswrapper[4805]: I0217 01:53:47.761288 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vw2k2_08eb41f7-10d2-42b7-b96a-998cd213dfe1/registry-server/0.log" Feb 17 01:53:47 crc kubenswrapper[4805]: I0217 01:53:47.815713 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbczx_a5c9f438-05f1-4087-a87b-07d2db71c1e0/extract-content/0.log" Feb 
17 01:53:47 crc kubenswrapper[4805]: I0217 01:53:47.823206 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbczx_a5c9f438-05f1-4087-a87b-07d2db71c1e0/extract-content/0.log" Feb 17 01:53:48 crc kubenswrapper[4805]: I0217 01:53:48.300150 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbczx_a5c9f438-05f1-4087-a87b-07d2db71c1e0/extract-utilities/0.log" Feb 17 01:53:48 crc kubenswrapper[4805]: I0217 01:53:48.444118 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbczx_a5c9f438-05f1-4087-a87b-07d2db71c1e0/extract-content/0.log" Feb 17 01:53:48 crc kubenswrapper[4805]: E0217 01:53:48.786061 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:53:48 crc kubenswrapper[4805]: I0217 01:53:48.816237 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xbczx_a5c9f438-05f1-4087-a87b-07d2db71c1e0/registry-server/0.log" Feb 17 01:53:53 crc kubenswrapper[4805]: I0217 01:53:53.077452 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:53:53 crc kubenswrapper[4805]: I0217 01:53:53.078094 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:53:53 crc kubenswrapper[4805]: I0217 01:53:53.078159 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 01:53:53 crc kubenswrapper[4805]: I0217 01:53:53.079259 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72cba03c5e9d28d8f63995ddf7a0a97ce08f7e75e3252cd3b8bd494acd70d944"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 01:53:53 crc kubenswrapper[4805]: I0217 01:53:53.079394 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://72cba03c5e9d28d8f63995ddf7a0a97ce08f7e75e3252cd3b8bd494acd70d944" gracePeriod=600 Feb 17 01:53:54 crc kubenswrapper[4805]: I0217 01:53:54.172383 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="72cba03c5e9d28d8f63995ddf7a0a97ce08f7e75e3252cd3b8bd494acd70d944" exitCode=0 Feb 17 01:53:54 crc kubenswrapper[4805]: I0217 01:53:54.172411 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"72cba03c5e9d28d8f63995ddf7a0a97ce08f7e75e3252cd3b8bd494acd70d944"} Feb 17 01:53:54 crc kubenswrapper[4805]: I0217 01:53:54.172667 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerStarted","Data":"500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe"} Feb 17 01:53:54 crc kubenswrapper[4805]: I0217 01:53:54.172689 4805 scope.go:117] "RemoveContainer" containerID="912a65b142918c2949d3074d386aaf6454393ae4d1a4c32438b02f9d28bf3525" Feb 17 01:53:56 crc kubenswrapper[4805]: E0217 01:53:56.788823 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:54:02 crc kubenswrapper[4805]: I0217 01:54:02.931144 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f47ddfbcd-hnb84_c9f5bbbc-6740-427e-90d5-69011b2966cd/prometheus-operator-admission-webhook/0.log" Feb 17 01:54:02 crc kubenswrapper[4805]: I0217 01:54:02.961982 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f47ddfbcd-vsgcg_93be50de-fcd3-41d1-8641-1b7c73cb26ea/prometheus-operator-admission-webhook/0.log" Feb 17 01:54:02 crc kubenswrapper[4805]: I0217 01:54:02.977148 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-xw7l6_841806ee-4049-4561-b025-3af0469f8fb2/prometheus-operator/0.log" Feb 17 01:54:03 crc kubenswrapper[4805]: I0217 01:54:03.122940 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-btvcr_6b7fab38-3b46-42bc-a296-945f451f04f6/perses-operator/0.log" Feb 17 01:54:03 crc kubenswrapper[4805]: I0217 01:54:03.138650 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-rztzq_ec346c4e-f52f-4ee4-9697-e4b95405fe5d/operator/0.log" Feb 17 01:54:03 crc kubenswrapper[4805]: I0217 01:54:03.163706 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-lhfgx_ae33ba11-f42a-4134-be89-fbe93e76f0ae/observability-ui-dashboards/0.log" Feb 17 01:54:03 crc kubenswrapper[4805]: E0217 01:54:03.788593 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:54:08 crc kubenswrapper[4805]: E0217 01:54:08.787922 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:54:16 crc kubenswrapper[4805]: E0217 
01:54:16.789022 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:54:19 crc kubenswrapper[4805]: I0217 01:54:19.834376 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5659c765-xsxhh_b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01/manager/0.log" Feb 17 01:54:19 crc kubenswrapper[4805]: I0217 01:54:19.839509 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5659c765-xsxhh_b4e0e1ff-a18a-491c-b98e-f71b8c1a2c01/kube-rbac-proxy/0.log" Feb 17 01:54:22 crc kubenswrapper[4805]: E0217 01:54:22.787663 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:54:31 crc kubenswrapper[4805]: E0217 01:54:31.786490 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:54:35 crc kubenswrapper[4805]: E0217 01:54:35.786315 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:54:45 crc kubenswrapper[4805]: I0217 01:54:45.788661 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:54:45 crc kubenswrapper[4805]: E0217 01:54:45.954081 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:54:45 crc kubenswrapper[4805]: E0217 01:54:45.954561 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:54:45 crc kubenswrapper[4805]: E0217 01:54:45.954711 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:54:45 crc kubenswrapper[4805]: E0217 01:54:45.956416 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:54:49 crc kubenswrapper[4805]: E0217 01:54:49.919444 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:54:49 crc kubenswrapper[4805]: E0217 01:54:49.919978 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:54:49 crc kubenswrapper[4805]: E0217 01:54:49.920145 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest 
current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:54:49 crc kubenswrapper[4805]: E0217 01:54:49.921357 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:54:58 crc kubenswrapper[4805]: E0217 01:54:58.788058 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:55:03 crc kubenswrapper[4805]: E0217 01:55:03.788573 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:55:10 crc kubenswrapper[4805]: E0217 01:55:10.787271 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:55:18 crc kubenswrapper[4805]: E0217 01:55:18.798095 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:55:25 crc kubenswrapper[4805]: E0217 01:55:25.789311 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:55:31 crc kubenswrapper[4805]: E0217 01:55:31.788056 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:55:40 crc kubenswrapper[4805]: E0217 01:55:40.787865 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:55:46 crc kubenswrapper[4805]: E0217 01:55:46.788921 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:55:52 crc kubenswrapper[4805]: E0217 01:55:52.788673 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:55:53 crc kubenswrapper[4805]: I0217 01:55:53.077845 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:55:53 crc kubenswrapper[4805]: I0217 01:55:53.077916 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:55:57 crc kubenswrapper[4805]: I0217 01:55:57.838232 4805 generic.go:334] "Generic (PLEG): container finished" podID="c521d6b8-b6fe-477e-84ac-db6f9a416901" containerID="9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439" exitCode=0 Feb 17 01:55:57 crc kubenswrapper[4805]: I0217 01:55:57.838346 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gjqlt/must-gather-hvjjh" event={"ID":"c521d6b8-b6fe-477e-84ac-db6f9a416901","Type":"ContainerDied","Data":"9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439"} Feb 17 01:55:57 crc kubenswrapper[4805]: I0217 01:55:57.839410 4805 scope.go:117] "RemoveContainer" containerID="9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439" Feb 17 01:55:57 crc kubenswrapper[4805]: I0217 01:55:57.936969 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gjqlt_must-gather-hvjjh_c521d6b8-b6fe-477e-84ac-db6f9a416901/gather/0.log" Feb 17 01:55:59 crc kubenswrapper[4805]: E0217 01:55:59.785946 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:56:04 crc kubenswrapper[4805]: E0217 01:56:04.795923 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.149811 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-gjqlt/must-gather-hvjjh"] Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.150439 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-gjqlt/must-gather-hvjjh" podUID="c521d6b8-b6fe-477e-84ac-db6f9a416901" containerName="copy" containerID="cri-o://6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3" gracePeriod=2 Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.160511 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-gjqlt/must-gather-hvjjh"] Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.645181 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gjqlt_must-gather-hvjjh_c521d6b8-b6fe-477e-84ac-db6f9a416901/copy/0.log" Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.646123 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-gjqlt/must-gather-hvjjh" Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.756940 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v9l4\" (UniqueName: \"kubernetes.io/projected/c521d6b8-b6fe-477e-84ac-db6f9a416901-kube-api-access-5v9l4\") pod \"c521d6b8-b6fe-477e-84ac-db6f9a416901\" (UID: \"c521d6b8-b6fe-477e-84ac-db6f9a416901\") " Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.757086 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c521d6b8-b6fe-477e-84ac-db6f9a416901-must-gather-output\") pod \"c521d6b8-b6fe-477e-84ac-db6f9a416901\" (UID: \"c521d6b8-b6fe-477e-84ac-db6f9a416901\") " Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.764389 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c521d6b8-b6fe-477e-84ac-db6f9a416901-kube-api-access-5v9l4" (OuterVolumeSpecName: "kube-api-access-5v9l4") pod "c521d6b8-b6fe-477e-84ac-db6f9a416901" (UID: "c521d6b8-b6fe-477e-84ac-db6f9a416901"). InnerVolumeSpecName "kube-api-access-5v9l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.860142 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v9l4\" (UniqueName: \"kubernetes.io/projected/c521d6b8-b6fe-477e-84ac-db6f9a416901-kube-api-access-5v9l4\") on node \"crc\" DevicePath \"\"" Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.950121 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-gjqlt_must-gather-hvjjh_c521d6b8-b6fe-477e-84ac-db6f9a416901/copy/0.log" Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.950826 4805 generic.go:334] "Generic (PLEG): container finished" podID="c521d6b8-b6fe-477e-84ac-db6f9a416901" containerID="6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3" exitCode=143 Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.950891 4805 scope.go:117] "RemoveContainer" containerID="6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3" Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.950913 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gjqlt/must-gather-hvjjh" Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.953277 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c521d6b8-b6fe-477e-84ac-db6f9a416901-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c521d6b8-b6fe-477e-84ac-db6f9a416901" (UID: "c521d6b8-b6fe-477e-84ac-db6f9a416901"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.963853 4805 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c521d6b8-b6fe-477e-84ac-db6f9a416901-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 01:56:06 crc kubenswrapper[4805]: I0217 01:56:06.973059 4805 scope.go:117] "RemoveContainer" containerID="9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439" Feb 17 01:56:07 crc kubenswrapper[4805]: I0217 01:56:07.058373 4805 scope.go:117] "RemoveContainer" containerID="6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3" Feb 17 01:56:07 crc kubenswrapper[4805]: E0217 01:56:07.058900 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3\": container with ID starting with 6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3 not found: ID does not exist" containerID="6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3" Feb 17 01:56:07 crc kubenswrapper[4805]: I0217 01:56:07.058943 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3"} err="failed to get container status \"6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3\": rpc error: code = NotFound desc = could not find container \"6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3\": container with ID starting with 6ffa4c460e18760e0ecdc0c4a70ef6fa105410d3c58984963992b0be5d67d9b3 not found: ID does not exist" Feb 17 01:56:07 crc kubenswrapper[4805]: I0217 01:56:07.058972 4805 scope.go:117] "RemoveContainer" containerID="9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439" Feb 17 01:56:07 crc kubenswrapper[4805]: E0217 01:56:07.059510 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439\": container with ID starting with 9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439 not found: ID does not exist" containerID="9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439" Feb 17 01:56:07 crc kubenswrapper[4805]: I0217 01:56:07.059550 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439"} err="failed to get container status \"9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439\": rpc error: code = NotFound desc = could not find container \"9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439\": container with ID starting with 9220d3f3cf85fed736166618a350d24d0203ee339521a660fcd3e1a6bfca8439 not found: ID does not exist" Feb 17 01:56:08 crc kubenswrapper[4805]: I0217 01:56:08.816464 4805 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="c521d6b8-b6fe-477e-84ac-db6f9a416901" path="/var/lib/kubelet/pods/c521d6b8-b6fe-477e-84ac-db6f9a416901/volumes" Feb 17 01:56:12 crc kubenswrapper[4805]: E0217 01:56:12.789113 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:56:17 crc kubenswrapper[4805]: E0217 01:56:17.799074 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:56:23 crc kubenswrapper[4805]: I0217 01:56:23.077153 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:56:23 crc kubenswrapper[4805]: I0217 01:56:23.077838 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:56:24 crc kubenswrapper[4805]: E0217 01:56:24.802009 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:56:30 crc kubenswrapper[4805]: E0217 01:56:30.788913 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:56:35 crc kubenswrapper[4805]: E0217 01:56:35.787370 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:56:44 crc kubenswrapper[4805]: E0217 01:56:44.803987 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:56:49 crc kubenswrapper[4805]: E0217 01:56:49.786457 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:56:53 crc kubenswrapper[4805]: I0217 01:56:53.077506 4805 patch_prober.go:28] interesting pod/machine-config-daemon-ckkzk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 01:56:53 crc kubenswrapper[4805]: I0217 01:56:53.078094 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 01:56:53 crc kubenswrapper[4805]: I0217 01:56:53.078142 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" Feb 17 01:56:53 crc kubenswrapper[4805]: I0217 01:56:53.078668 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe"} pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 01:56:53 crc kubenswrapper[4805]: I0217 01:56:53.078714 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerName="machine-config-daemon" containerID="cri-o://500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" gracePeriod=600 Feb 17 01:56:53 crc kubenswrapper[4805]: E0217 01:56:53.203033 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:56:53 crc kubenswrapper[4805]: I0217 01:56:53.573808 4805 generic.go:334] "Generic (PLEG): container finished" podID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" exitCode=0 Feb 17 01:56:53 crc kubenswrapper[4805]: I0217 01:56:53.573861 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" event={"ID":"2531e0b8-5ad4-4dd3-86b9-bd6dec526041","Type":"ContainerDied","Data":"500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe"} Feb 17 01:56:53 crc kubenswrapper[4805]: I0217 01:56:53.573905 4805 scope.go:117] "RemoveContainer" containerID="72cba03c5e9d28d8f63995ddf7a0a97ce08f7e75e3252cd3b8bd494acd70d944" Feb 17 01:56:53 crc kubenswrapper[4805]: I0217 01:56:53.574590 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:56:53 crc kubenswrapper[4805]: E0217 01:56:53.574863 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:56:56 crc kubenswrapper[4805]: E0217 01:56:56.790183 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:57:03 crc kubenswrapper[4805]: E0217 01:57:03.787133 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:57:05 crc kubenswrapper[4805]: I0217 01:57:05.785151 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:57:05 crc kubenswrapper[4805]: E0217 01:57:05.786211 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:57:09 crc kubenswrapper[4805]: E0217 01:57:09.787593 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:57:16 crc kubenswrapper[4805]: I0217 01:57:16.785274 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:57:16 crc kubenswrapper[4805]: E0217 01:57:16.786407 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:57:16 crc kubenswrapper[4805]: E0217 01:57:16.787973 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:57:20 crc kubenswrapper[4805]: E0217 01:57:20.788393 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:57:27 crc kubenswrapper[4805]: I0217 01:57:27.784442 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:57:27 crc kubenswrapper[4805]: E0217 01:57:27.785444 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:57:28 crc kubenswrapper[4805]: E0217 01:57:28.795785 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:57:32 crc kubenswrapper[4805]: E0217 01:57:32.796722 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:57:42 crc kubenswrapper[4805]: I0217 01:57:42.785040 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:57:42 crc kubenswrapper[4805]: E0217 01:57:42.787193 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:57:43 crc kubenswrapper[4805]: E0217 01:57:43.789163 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:57:44 crc kubenswrapper[4805]: E0217 01:57:44.800216 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:57:55 crc kubenswrapper[4805]: I0217 01:57:55.784872 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:57:55 crc kubenswrapper[4805]: E0217 01:57:55.785785 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:57:56 crc kubenswrapper[4805]: E0217 01:57:56.790625 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:57:57 crc kubenswrapper[4805]: E0217 01:57:57.788147 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:58:10 crc kubenswrapper[4805]: I0217 01:58:10.785780 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:58:10 crc kubenswrapper[4805]: E0217 01:58:10.787228 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:58:10 crc kubenswrapper[4805]: E0217 01:58:10.787977 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:58:11 crc kubenswrapper[4805]: E0217 01:58:11.788353 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:58:22 crc kubenswrapper[4805]: E0217 01:58:22.790564 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:58:22 crc kubenswrapper[4805]: E0217 01:58:22.791165 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:58:25 crc kubenswrapper[4805]: I0217 01:58:25.784416 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:58:25 crc 
kubenswrapper[4805]: E0217 01:58:25.785102 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:58:34 crc kubenswrapper[4805]: E0217 01:58:34.806504 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:58:34 crc kubenswrapper[4805]: E0217 01:58:34.806625 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:58:38 crc kubenswrapper[4805]: I0217 01:58:38.786161 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:58:38 crc kubenswrapper[4805]: E0217 01:58:38.787691 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:58:48 crc kubenswrapper[4805]: E0217 01:58:48.788257 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:58:48 crc kubenswrapper[4805]: E0217 01:58:48.788760 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:58:52 crc kubenswrapper[4805]: I0217 01:58:52.785501 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:58:52 crc kubenswrapper[4805]: E0217 01:58:52.786154 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:59:00 crc kubenswrapper[4805]: E0217 01:59:00.788670 4805 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:59:00 crc kubenswrapper[4805]: E0217 01:59:00.789472 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:59:06 crc kubenswrapper[4805]: I0217 01:59:06.785055 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:59:06 crc kubenswrapper[4805]: E0217 01:59:06.785810 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:59:14 crc kubenswrapper[4805]: E0217 01:59:14.801238 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:59:15 crc kubenswrapper[4805]: E0217 01:59:15.788653 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:59:18 crc kubenswrapper[4805]: I0217 01:59:18.786010 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:59:18 crc kubenswrapper[4805]: E0217 01:59:18.786911 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:59:21 crc kubenswrapper[4805]: I0217 01:59:21.348990 4805 scope.go:117] "RemoveContainer" containerID="dde19cbe546ab9eda697accee7ea28c08259ce8fe4be54517e3cfed732bcceae" Feb 17 01:59:21 crc kubenswrapper[4805]: I0217 01:59:21.377092 4805 scope.go:117] "RemoveContainer" containerID="10ff366d67f3fafac353bf2ffa95f22f20ca72ece5be63e03068e854e4549009" Feb 17 01:59:21 crc kubenswrapper[4805]: I0217 01:59:21.432669 4805 scope.go:117] "RemoveContainer" containerID="88205f038e7fd57f4988c0bed83bea11eecb88b07afb083a64fad48d37681a24" Feb 17 01:59:21 crc kubenswrapper[4805]: I0217 01:59:21.456931 4805 scope.go:117] "RemoveContainer" containerID="b41c93382af16f9136f77f0fb2d1d8e12f8c2267aad15475fec49f5bf351a0fb" Feb 
17 01:59:21 crc kubenswrapper[4805]: I0217 01:59:21.484288 4805 scope.go:117] "RemoveContainer" containerID="b940aba2478ad9d0a424a39448533cf359deb636c2bb850101b42d24f02e9f02" Feb 17 01:59:21 crc kubenswrapper[4805]: I0217 01:59:21.532397 4805 scope.go:117] "RemoveContainer" containerID="dbfad150ca38f502126278a5f46e0bd9732dee09dc02c303d9c257a8805f0288" Feb 17 01:59:28 crc kubenswrapper[4805]: E0217 01:59:28.789208 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:59:29 crc kubenswrapper[4805]: E0217 01:59:29.786204 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:59:33 crc kubenswrapper[4805]: I0217 01:59:33.785101 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:59:33 crc kubenswrapper[4805]: E0217 01:59:33.785556 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:59:40 crc kubenswrapper[4805]: E0217 01:59:40.789835 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:59:41 crc kubenswrapper[4805]: E0217 01:59:41.787897 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:59:44 crc kubenswrapper[4805]: I0217 01:59:44.805670 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:59:44 crc kubenswrapper[4805]: E0217 01:59:44.807129 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 01:59:54 crc kubenswrapper[4805]: I0217 01:59:54.801001 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 01:59:54 crc kubenswrapper[4805]: E0217 01:59:54.937084 4805 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:59:54 crc kubenswrapper[4805]: E0217 01:59:54.937165 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 17 01:59:54 crc kubenswrapper[4805]: E0217 01:59:54.937365 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt2vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-tvlw9_openstack(70acc4f3-ace6-4366-9270-6bd9242da91b): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 01:59:54 crc kubenswrapper[4805]: E0217 01:59:54.938598 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-tvlw9" podUID="70acc4f3-ace6-4366-9270-6bd9242da91b" Feb 17 01:59:55 crc kubenswrapper[4805]: E0217 01:59:55.896047 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:59:55 crc kubenswrapper[4805]: E0217 01:59:55.896529 4805 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 01:59:55 crc kubenswrapper[4805]: E0217 01:59:55.896709 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h5f5h87h5b8h4h654h8dh66hd8h5ddh67ch65ch657h5f4hb5h56dh5fhb8h5dbh66fh677h567hb5h5d5h56bh55ch68dh67fhdch64dh5c9h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(78cfb873-5ac3-472d-91e4-299e5df21da3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 01:59:55 crc kubenswrapper[4805]: E0217 01:59:55.897970 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="78cfb873-5ac3-472d-91e4-299e5df21da3" Feb 17 01:59:56 crc kubenswrapper[4805]: I0217 01:59:56.788777 4805 scope.go:117] "RemoveContainer" containerID="500fc987a70aee0c293246fb6dc7eff6b95a842f049e77a883f59f52d233c3fe" Feb 17 01:59:56 crc kubenswrapper[4805]: E0217 01:59:56.789969 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-ckkzk_openshift-machine-config-operator(2531e0b8-5ad4-4dd3-86b9-bd6dec526041)\"" pod="openshift-machine-config-operator/machine-config-daemon-ckkzk" podUID="2531e0b8-5ad4-4dd3-86b9-bd6dec526041" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.170820 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv"] Feb 17 02:00:00 crc kubenswrapper[4805]: E0217 02:00:00.171802 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerName="registry-server" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.171818 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerName="registry-server" Feb 17 02:00:00 crc kubenswrapper[4805]: E0217 02:00:00.171838 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c521d6b8-b6fe-477e-84ac-db6f9a416901" containerName="gather" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.171846 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c521d6b8-b6fe-477e-84ac-db6f9a416901" containerName="gather" Feb 17 02:00:00 crc kubenswrapper[4805]: E0217 02:00:00.171873 4805 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerName="extract-utilities" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.171881 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerName="extract-utilities" Feb 17 02:00:00 crc kubenswrapper[4805]: E0217 02:00:00.171894 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c521d6b8-b6fe-477e-84ac-db6f9a416901" containerName="copy" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.171901 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c521d6b8-b6fe-477e-84ac-db6f9a416901" containerName="copy" Feb 17 02:00:00 crc kubenswrapper[4805]: E0217 02:00:00.171917 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerName="extract-content" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.171926 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerName="extract-content" Feb 17 02:00:00 crc kubenswrapper[4805]: E0217 02:00:00.171950 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerName="extract-content" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.171958 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerName="extract-content" Feb 17 02:00:00 crc kubenswrapper[4805]: E0217 02:00:00.171966 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerName="registry-server" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.171976 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerName="registry-server" Feb 17 02:00:00 crc kubenswrapper[4805]: E0217 02:00:00.171989 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerName="extract-utilities" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.171997 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerName="extract-utilities" Feb 17 02:00:00 crc kubenswrapper[4805]: E0217 02:00:00.172011 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerName="registry-server" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.172019 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerName="registry-server" Feb 17 02:00:00 crc kubenswrapper[4805]: E0217 02:00:00.172037 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerName="extract-content" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.172045 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerName="extract-content" Feb 17 02:00:00 crc kubenswrapper[4805]: E0217 02:00:00.172063 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerName="extract-utilities" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.172070 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerName="extract-utilities" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 
02:00:00.172309 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c521d6b8-b6fe-477e-84ac-db6f9a416901" containerName="gather" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.172347 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d1e229-b049-4942-aa7c-e9ebbd074671" containerName="registry-server" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.172367 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a72d8a88-2ae5-497c-a5b9-bf12faedea45" containerName="registry-server" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.172384 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0158c6c9-711a-47a9-baae-6360bbed01fd" containerName="registry-server" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.172401 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c521d6b8-b6fe-477e-84ac-db6f9a416901" containerName="copy" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.173654 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.177842 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.178418 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv"] Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.186235 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.285624 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzg2g\" (UniqueName: \"kubernetes.io/projected/b284ecdc-09d8-4d02-8c59-be03472acf92-kube-api-access-nzg2g\") pod \"collect-profiles-29521560-wldnv\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.285720 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b284ecdc-09d8-4d02-8c59-be03472acf92-config-volume\") pod \"collect-profiles-29521560-wldnv\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.285783 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b284ecdc-09d8-4d02-8c59-be03472acf92-secret-volume\") pod \"collect-profiles-29521560-wldnv\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.387518 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b284ecdc-09d8-4d02-8c59-be03472acf92-config-volume\") pod \"collect-profiles-29521560-wldnv\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 
02:00:00.387615 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b284ecdc-09d8-4d02-8c59-be03472acf92-secret-volume\") pod \"collect-profiles-29521560-wldnv\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.387716 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzg2g\" (UniqueName: \"kubernetes.io/projected/b284ecdc-09d8-4d02-8c59-be03472acf92-kube-api-access-nzg2g\") pod \"collect-profiles-29521560-wldnv\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.389404 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b284ecdc-09d8-4d02-8c59-be03472acf92-config-volume\") pod \"collect-profiles-29521560-wldnv\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.399133 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b284ecdc-09d8-4d02-8c59-be03472acf92-secret-volume\") pod \"collect-profiles-29521560-wldnv\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.422216 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzg2g\" (UniqueName: \"kubernetes.io/projected/b284ecdc-09d8-4d02-8c59-be03472acf92-kube-api-access-nzg2g\") pod \"collect-profiles-29521560-wldnv\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:00 crc kubenswrapper[4805]: I0217 02:00:00.523721 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:01 crc kubenswrapper[4805]: I0217 02:00:01.063023 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv"] Feb 17 02:00:01 crc kubenswrapper[4805]: W0217 02:00:01.075431 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb284ecdc_09d8_4d02_8c59_be03472acf92.slice/crio-1615398597d8dcc7d7ae4df462aeedf747c89cdbb7716e14f6272887af0a7a10 WatchSource:0}: Error finding container 1615398597d8dcc7d7ae4df462aeedf747c89cdbb7716e14f6272887af0a7a10: Status 404 returned error can't find the container with id 1615398597d8dcc7d7ae4df462aeedf747c89cdbb7716e14f6272887af0a7a10 Feb 17 02:00:02 crc kubenswrapper[4805]: I0217 02:00:02.056010 4805 generic.go:334] "Generic (PLEG): container finished" podID="b284ecdc-09d8-4d02-8c59-be03472acf92" containerID="5b822815f481a3bfbb4fd887e86174b53f11d049f2d8eaeee11d1228a48cdf75" exitCode=0 Feb 17 02:00:02 crc kubenswrapper[4805]: I0217 02:00:02.056105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" event={"ID":"b284ecdc-09d8-4d02-8c59-be03472acf92","Type":"ContainerDied","Data":"5b822815f481a3bfbb4fd887e86174b53f11d049f2d8eaeee11d1228a48cdf75"} Feb 17 02:00:02 crc kubenswrapper[4805]: I0217 02:00:02.056176 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" event={"ID":"b284ecdc-09d8-4d02-8c59-be03472acf92","Type":"ContainerStarted","Data":"1615398597d8dcc7d7ae4df462aeedf747c89cdbb7716e14f6272887af0a7a10"} Feb 17 02:00:03 crc kubenswrapper[4805]: I0217 02:00:03.548059 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:03 crc kubenswrapper[4805]: I0217 02:00:03.678537 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b284ecdc-09d8-4d02-8c59-be03472acf92-secret-volume\") pod \"b284ecdc-09d8-4d02-8c59-be03472acf92\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " Feb 17 02:00:03 crc kubenswrapper[4805]: I0217 02:00:03.678718 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b284ecdc-09d8-4d02-8c59-be03472acf92-config-volume\") pod \"b284ecdc-09d8-4d02-8c59-be03472acf92\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " Feb 17 02:00:03 crc kubenswrapper[4805]: I0217 02:00:03.678859 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzg2g\" (UniqueName: \"kubernetes.io/projected/b284ecdc-09d8-4d02-8c59-be03472acf92-kube-api-access-nzg2g\") pod \"b284ecdc-09d8-4d02-8c59-be03472acf92\" (UID: \"b284ecdc-09d8-4d02-8c59-be03472acf92\") " Feb 17 02:00:03 crc kubenswrapper[4805]: I0217 02:00:03.679756 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b284ecdc-09d8-4d02-8c59-be03472acf92-config-volume" (OuterVolumeSpecName: "config-volume") pod "b284ecdc-09d8-4d02-8c59-be03472acf92" (UID: "b284ecdc-09d8-4d02-8c59-be03472acf92"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 02:00:03 crc kubenswrapper[4805]: I0217 02:00:03.685596 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b284ecdc-09d8-4d02-8c59-be03472acf92-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b284ecdc-09d8-4d02-8c59-be03472acf92" (UID: "b284ecdc-09d8-4d02-8c59-be03472acf92"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 02:00:03 crc kubenswrapper[4805]: I0217 02:00:03.686677 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b284ecdc-09d8-4d02-8c59-be03472acf92-kube-api-access-nzg2g" (OuterVolumeSpecName: "kube-api-access-nzg2g") pod "b284ecdc-09d8-4d02-8c59-be03472acf92" (UID: "b284ecdc-09d8-4d02-8c59-be03472acf92"). InnerVolumeSpecName "kube-api-access-nzg2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 02:00:03 crc kubenswrapper[4805]: I0217 02:00:03.781720 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b284ecdc-09d8-4d02-8c59-be03472acf92-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 02:00:03 crc kubenswrapper[4805]: I0217 02:00:03.781758 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b284ecdc-09d8-4d02-8c59-be03472acf92-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 02:00:03 crc kubenswrapper[4805]: I0217 02:00:03.781773 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzg2g\" (UniqueName: \"kubernetes.io/projected/b284ecdc-09d8-4d02-8c59-be03472acf92-kube-api-access-nzg2g\") on node \"crc\" DevicePath \"\"" Feb 17 02:00:04 crc kubenswrapper[4805]: I0217 02:00:04.082486 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" event={"ID":"b284ecdc-09d8-4d02-8c59-be03472acf92","Type":"ContainerDied","Data":"1615398597d8dcc7d7ae4df462aeedf747c89cdbb7716e14f6272887af0a7a10"} Feb 17 02:00:04 crc kubenswrapper[4805]: I0217 02:00:04.082535 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521560-wldnv" Feb 17 02:00:04 crc kubenswrapper[4805]: I0217 02:00:04.082552 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1615398597d8dcc7d7ae4df462aeedf747c89cdbb7716e14f6272887af0a7a10" Feb 17 02:00:04 crc kubenswrapper[4805]: I0217 02:00:04.669273 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp"] Feb 17 02:00:04 crc kubenswrapper[4805]: I0217 02:00:04.687566 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521515-wqbhp"] Feb 17 02:00:04 crc kubenswrapper[4805]: I0217 02:00:04.800851 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4e61b3-9a4a-497e-b65c-b61b5b09feb6" path="/var/lib/kubelet/pods/ac4e61b3-9a4a-497e-b65c-b61b5b09feb6/volumes"